aGrUM  0.20.3
a C++ library for (probabilistic) graphical models
gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine > Class Template Reference

<agrum/CN/CNMonteCarloSampling.h> More...

#include <CNMonteCarloSampling.h>


Public Attributes

Signaler3< Size, double, double > onProgress
 Progression, error and time. More...
 
Signaler1< std::string > onStop
 Criteria message (messageApproximationScheme) emitted when the scheme stops. More...
 

Public Member Functions

virtual void insertEvidenceFile (const std::string &path)
 Insert evidence from file. More...
 
Constructors / Destructors
 CNMonteCarloSampling (const CredalNet< GUM_SCALAR > &credalNet)
 Constructor. More...
 
virtual ~CNMonteCarloSampling ()
 Destructor. More...
 
Public algorithm methods
void makeInference ()
 Starts the inference. More...
 
Post-inference methods
virtual void eraseAllEvidence ()
 Erase all inference related data to perform another one. More...
 
Getters and setters
VarMod2BNsMap< GUM_SCALAR > * getVarMod2BNsMap ()
 Get optimum IBayesNet. More...
 
const CredalNet< GUM_SCALAR > & credalNet () const
 Get this credal network. More...
 
const NodeProperty< std::vector< NodeId > > & getT0Cluster () const
 Get the t0_ cluster. More...
 
const NodeProperty< std::vector< NodeId > > & getT1Cluster () const
 Get the t1_ cluster. More...
 
void setRepetitiveInd (const bool repetitive)
 
void storeVertices (const bool value)
 
bool storeVertices () const
 Get whether credal sets vertices are stored. More...
 
void storeBNOpt (const bool value)
 
bool storeBNOpt () const
 
bool repetitiveInd () const
 Get the current independence status. More...
 
Pre-inference initialization methods
void insertModalsFile (const std::string &path)
 Insert variables modalities from file to compute expectations. More...
 
void insertModals (const std::map< std::string, std::vector< GUM_SCALAR > > &modals)
 Insert variables modalities from map to compute expectations. More...
 
void insertEvidence (const std::map< std::string, std::vector< GUM_SCALAR > > &eviMap)
 Insert evidence from map. More...
 
void insertEvidence (const NodeProperty< std::vector< GUM_SCALAR > > &evidence)
 Insert evidence from Property. More...
 
void insertQueryFile (const std::string &path)
 Insert query variables states from file. More...
 
void insertQuery (const NodeProperty< std::vector< bool > > &query)
 Insert query variables and states from Property. More...
 
Post-inference methods
Potential< GUM_SCALAR > marginalMin (const NodeId id) const
 Get the lower marginals of a given node id. More...
 
Potential< GUM_SCALAR > marginalMin (const std::string &varName) const
 Get the lower marginals of a given variable name. More...
 
Potential< GUM_SCALAR > marginalMax (const NodeId id) const
 Get the upper marginals of a given node id. More...
 
Potential< GUM_SCALAR > marginalMax (const std::string &varName) const
 Get the upper marginals of a given variable name. More...
 
const GUM_SCALAR & expectationMin (const NodeId id) const
 Get the lower expectation of a given node id. More...
 
const GUM_SCALAR & expectationMin (const std::string &varName) const
 Get the lower expectation of a given variable name. More...
 
const GUM_SCALAR & expectationMax (const NodeId id) const
 Get the upper expectation of a given node id. More...
 
const GUM_SCALAR & expectationMax (const std::string &varName) const
 Get the upper expectation of a given variable name. More...
 
const std::vector< GUM_SCALAR > & dynamicExpMin (const std::string &varName) const
 Get the lower dynamic expectation of a given variable prefix (without the time step included, i.e. More...
 
const std::vector< GUM_SCALAR > & dynamicExpMax (const std::string &varName) const
 Get the upper dynamic expectation of a given variable prefix (without the time step included, i.e. More...
 
const std::vector< std::vector< GUM_SCALAR > > & vertices (const NodeId id) const
 Get the vertices of a given node id. More...
 
void saveMarginals (const std::string &path) const
 Saves marginals to file. More...
 
void saveExpectations (const std::string &path) const
 Saves expectations to file. More...
 
void saveVertices (const std::string &path) const
 Saves vertices to file. More...
 
void dynamicExpectations ()
 Compute dynamic expectations. More...
 
std::string toString () const
 Print all nodes marginals to standard output. More...
 
const std::string getApproximationSchemeMsg ()
 Get approximation scheme state. More...
 
Getters and setters
void setEpsilon (double eps)
 Given that we approximate f(t), stopping criterion on |f(t+1)-f(t)|. More...
 
double epsilon () const
 Returns the value of epsilon. More...
 
void disableEpsilon ()
 Disable stopping criterion on epsilon. More...
 
void enableEpsilon ()
 Enable stopping criterion on epsilon. More...
 
bool isEnabledEpsilon () const
 Returns true if stopping criterion on epsilon is enabled, false otherwise. More...
 
void setMinEpsilonRate (double rate)
 Given that we approximate f(t), stopping criterion on d/dt(|f(t+1)-f(t)|). More...
 
double minEpsilonRate () const
 Returns the value of the minimal epsilon rate. More...
 
void disableMinEpsilonRate ()
 Disable stopping criterion on epsilon rate. More...
 
void enableMinEpsilonRate ()
 Enable stopping criterion on epsilon rate. More...
 
bool isEnabledMinEpsilonRate () const
 Returns true if stopping criterion on epsilon rate is enabled, false otherwise. More...
 
void setMaxIter (Size max)
 Stopping criterion on number of iterations. More...
 
Size maxIter () const
 Returns the criterion on number of iterations. More...
 
void disableMaxIter ()
 Disable stopping criterion on max iterations. More...
 
void enableMaxIter ()
 Enable stopping criterion on max iterations. More...
 
bool isEnabledMaxIter () const
 Returns true if stopping criterion on max iterations is enabled, false otherwise. More...
 
void setMaxTime (double timeout)
 Stopping criterion on timeout. More...
 
double maxTime () const
 Returns the timeout (in seconds). More...
 
double currentTime () const
 Returns the current running time in seconds. More...
 
void disableMaxTime ()
 Disable stopping criterion on timeout. More...
 
void enableMaxTime ()
 Enable stopping criterion on timeout. More...
 
bool isEnabledMaxTime () const
 Returns true if stopping criterion on timeout is enabled, false otherwise. More...
 
void setPeriodSize (Size p)
 Number of samples between two tests of the stopping criteria. More...
 
Size periodSize () const
 Returns the period size. More...
 
void setVerbosity (bool v)
 Set the verbosity on (true) or off (false). More...
 
bool verbosity () const
 Returns true if verbosity is enabled. More...
 
ApproximationSchemeSTATE stateApproximationScheme () const
 Returns the approximation scheme state. More...
 
Size nbrIterations () const
 Returns the number of iterations. More...
 
const std::vector< double > & history () const
 Returns the scheme history. More...
 
void initApproximationScheme ()
 Initialise the scheme. More...
 
bool startOfPeriod ()
 Returns true if we are at the beginning of a period (compute error is mandatory). More...
 
void updateApproximationScheme (unsigned int incr=1)
 Update the scheme w.r.t the new error and increment steps. More...
 
Size remainingBurnIn ()
 Returns the remaining burn in. More...
 
void stopApproximationScheme ()
 Stop the approximation scheme. More...
 
bool continueApproximationScheme (double error)
 Update the scheme w.r.t the new error. More...
 
Getters and setters
std::string messageApproximationScheme () const
 Returns the approximation scheme message. More...
 

Public Types

enum  ApproximationSchemeSTATE : char {
  ApproximationSchemeSTATE::Undefined, ApproximationSchemeSTATE::Continue, ApproximationSchemeSTATE::Epsilon, ApproximationSchemeSTATE::Rate,
  ApproximationSchemeSTATE::Limit, ApproximationSchemeSTATE::TimeLimit, ApproximationSchemeSTATE::Stopped
}
 The different state of an approximation scheme. More...
 

Protected Attributes

bool repetitiveInd_
 
_margis_ l_marginalMin_
 Threads lower marginals, one per thread. More...
 
_margis_ l_marginalMax_
 Threads upper marginals, one per thread. More...
 
_expes_ l_expectationMin_
 Threads lower expectations, one per thread. More...
 
_expes_ l_expectationMax_
 Threads upper expectations, one per thread. More...
 
_modals_ l_modal_
 Threads modalities. More...
 
_credalSets_ l_marginalSets_
 Threads vertices. More...
 
_margis_ l_evidence_
 Threads evidence. More...
 
_clusters_ l_clusters_
 Threads clusters. More...
 
std::vector< _bnet_ *> workingSet_
 Threads IBayesNet. More...
 
std::vector< List< const Potential< GUM_SCALAR > *> *> workingSetE_
 Threads evidence. More...
 
std::vector< BNInferenceEngine *> l_inferenceEngine_
 Threads BNInferenceEngine. More...
 
std::vector< VarMod2BNsMap< GUM_SCALAR > *> l_optimalNet_
 Threads optimal IBayesNet. More...
 
const CredalNet< GUM_SCALAR > * credalNet_
 A pointer to the Credal Net used. More...
 
margi oldMarginalMin_
 Old lower marginals used to compute epsilon. More...
 
margi oldMarginalMax_
 Old upper marginals used to compute epsilon. More...
 
margi marginalMin_
 Lower marginals. More...
 
margi marginalMax_
 Upper marginals. More...
 
credalSet marginalSets_
 Credal sets vertices, if enabled. More...
 
expe expectationMin_
 Lower expectations, if some variables modalities were inserted. More...
 
expe expectationMax_
 Upper expectations, if some variables modalities were inserted. More...
 
dynExpe dynamicExpMin_
 Lower dynamic expectations. More...
 
dynExpe dynamicExpMax_
 Upper dynamic expectations. More...
 
dynExpe modal_
 Variables modalities used to compute expectations. More...
 
margi evidence_
 Holds observed variables states. More...
 
query query_
 Holds the query nodes states. More...
 
cluster t0_
 Clusters of nodes used with dynamic networks. More...
 
cluster t1_
 Clusters of nodes used with dynamic networks. More...
 
bool storeVertices_
 True if credal sets vertices are stored, False otherwise. More...
 
bool storeBNOpt_
 True if optimal IBayesNets are stored, False otherwise. More...
 
VarMod2BNsMap< GUM_SCALAR > dbnOpt_
 Object used to efficiently store optimal bayes net during inference, for some algorithms. More...
 
int timeSteps_
 The number of time steps of this network (only useful for dynamic networks). More...
 
double current_epsilon_
 Current epsilon. More...
 
double last_epsilon_
 Last epsilon value. More...
 
double current_rate_
 Current rate. More...
 
Size current_step_
 The current step. More...
 
Timer timer_
 The timer. More...
 
ApproximationSchemeSTATE current_state_
 The current state. More...
 
std::vector< double > history_
 The scheme history, used only if verbosity == true. More...
 
double eps_
 Threshold for convergence. More...
 
bool enabled_eps_
 If true, the threshold convergence is enabled. More...
 
double min_rate_eps_
 Threshold for the epsilon rate. More...
 
bool enabled_min_rate_eps_
 If true, the minimal threshold for epsilon rate is enabled. More...
 
double max_time_
 The timeout. More...
 
bool enabled_max_time_
 If true, the timeout is enabled. More...
 
Size max_iter_
 The maximum iterations. More...
 
bool enabled_max_iter_
 If true, the maximum iterations stopping criterion is enabled. More...
 
Size burn_in_
 Number of iterations before checking stopping criteria. More...
 
Size period_size_
 Checking criteria frequency. More...
 
bool verbosity_
 If true, verbosity is enabled. More...
 

Protected Member Functions

Protected initialization methods


void initThreadsData_ (const Size &num_threads, const bool _storeVertices_, const bool _storeBNOpt_)
 Initialize threads data. More...
 
Protected algorithms methods
bool updateThread_ (const NodeId &id, const std::vector< GUM_SCALAR > &vertex, const bool &elimRedund=false)
 Update thread information (marginals, expectations, IBayesNet, vertices) for a given node id. More...
 
void updateMarginals_ ()
 Fusion of threads marginals. More...
 
const GUM_SCALAR computeEpsilon_ ()
 Compute epsilon and update old marginals. More...
 
void updateOldMarginals_ ()
 Update old marginals (from current marginals). More...
 
Protected post-inference methods
void optFusion_ ()
 Fusion of threads optimal IBayesNet. More...
 
void expFusion_ ()
 Fusion of threads expectations. More...
 
void verticesFusion_ ()
 
Protected initialization methods
void repetitiveInit_ ()
 Initialize t0_ and t1_ clusters. More...
 
void initExpectations_ ()
 Initialize lower and upper expectations before inference, with the lower expectation being initialized on the highest modality and the upper expectation being initialized on the lowest modality. More...
 
void initMarginals_ ()
 Initialize lower and upper old marginals and marginals before inference, with the lower marginal being 1 and the upper 0. More...
 
void initMarginalSets_ ()
 Initialize credal set vertices with empty sets. More...
 
Protected algorithms methods
void updateExpectations_ (const NodeId &id, const std::vector< GUM_SCALAR > &vertex)
 Given a node id and one of its possible vertices obtained during inference, update this node's lower and upper expectations. More...
 
void updateCredalSets_ (const NodeId &id, const std::vector< GUM_SCALAR > &vertex, const bool &elimRedund=false)
 Given a node id and one of its possible vertices, update its credal set. More...
 
Protected post-inference methods
void dynamicExpectations_ ()
 Rearrange lower and upper expectations to suit dynamic networks. More...
 

Detailed Description

template<typename GUM_SCALAR, class BNInferenceEngine = LazyPropagation< GUM_SCALAR >>
class gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >

<agrum/CN/CNMonteCarloSampling.h>

Inference by a basic (purely random) sampling algorithm over Bayesian networks drawn from a credal network.

Template Parameters
GUM_SCALAR: A floating-point type (float, double, long double, ...).
BNInferenceEngine: An IBayesNet inference engine such as LazyPropagation (recommended).
Author
Matthieu HOURBRACQ and Pierre-Henri WUILLEMIN
Warning
p(e) must be available (by a call to my_BNInferenceEngine.evidenceMarginal()). The vertices are correct only if p(e) > 0 for a sample; the test is made once.

Definition at line 60 of file CNMonteCarloSampling.h.
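As a self-contained illustration of the loop this class implements (sample one vertex per credal set, fold the resulting marginals into running lower/upper bounds, and stop once a pass adds nothing), here is a toy sketch in plain C++. ToyCredalMC and all of its members are illustrative names for this sketch, not the aGrUM API; a real run goes through CredalNet, BNInferenceEngine and the approximation scheme instead.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <random>
#include <vector>

// Toy credal Monte Carlo: each "node" carries a credal set given as a list
// of candidate distributions (vertices); each step picks one vertex per node
// at random and absorbs it into running lower/upper marginals.
struct ToyCredalMC {
  std::vector<std::vector<std::vector<double>>> vertices;  // [node][vertex][state]
  std::vector<std::vector<double>> lo, hi;                 // running bounds
  std::mt19937 rng{42};                                    // fixed seed for reproducibility

  explicit ToyCredalMC(std::vector<std::vector<std::vector<double>>> v)
      : vertices(std::move(v)) {
    for (const auto& node : vertices) {
      lo.emplace_back(node[0].size(), 1.0);  // lower marginals start at 1
      hi.emplace_back(node[0].size(), 0.0);  // upper marginals start at 0
    }
  }

  // One sample: pick a vertex per node, update the bounds; returns the
  // largest change (the "error" a stopping criterion would look at).
  double step() {
    double eps = 0.0;
    for (std::size_t n = 0; n < vertices.size(); ++n) {
      std::uniform_int_distribution<std::size_t> pick(0, vertices[n].size() - 1);
      const auto& vertex = vertices[n][pick(rng)];
      for (std::size_t s = 0; s < vertex.size(); ++s) {
        double dLo = lo[n][s] - std::min(lo[n][s], vertex[s]);
        double dHi = std::max(hi[n][s], vertex[s]) - hi[n][s];
        lo[n][s] -= dLo;  // lower bound can only decrease
        hi[n][s] += dHi;  // upper bound can only increase
        eps = std::max({eps, dLo, dHi});
      }
    }
    return eps;  // 0 once the bounds have absorbed every sampled vertex
  }
};
```

For a single binary node with vertices {0.2, 0.8} and {0.4, 0.6}, repeated steps drive the bounds to lo = {0.2, 0.6} and hi = {0.4, 0.8}, mirroring how marginalMin/marginalMax bracket the true credal marginals.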

Member Typedef Documentation

◆ _infEs_

template<typename GUM_SCALAR , class BNInferenceEngine = LazyPropagation< GUM_SCALAR >>
using gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::_infEs_ = MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >
private

To easily access MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine > methods.

Definition at line 65 of file CNMonteCarloSampling.h.

Member Enumeration Documentation

◆ ApproximationSchemeSTATE

The different state of an approximation scheme.

Enumerator
Undefined 
Continue 
Epsilon 
Rate 
Limit 
TimeLimit 
Stopped 

Definition at line 64 of file IApproximationSchemeConfiguration.h.

enum ApproximationSchemeSTATE : char {
  Undefined,
  Continue,
  Epsilon,
  Rate,
  Limit,
  TimeLimit,
  Stopped
};

Constructor & Destructor Documentation

◆ CNMonteCarloSampling()

template<typename GUM_SCALAR , class BNInferenceEngine >
gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::CNMonteCarloSampling ( const CredalNet< GUM_SCALAR > &  credalNet)
explicit

Constructor.

Parameters
credalNet: The CredalNet to be used by the algorithm.

Definition at line 29 of file CNMonteCarloSampling_tpl.h.

References gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::CNMonteCarloSampling().

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::CNMonteCarloSampling().

template < typename GUM_SCALAR, class BNInferenceEngine >
CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::CNMonteCarloSampling(const CredalNet< GUM_SCALAR >& credalNet) :
    MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >(credalNet) {
  _infEs_::repetitiveInd_ = false;  // reconstructed: assignment lost in extraction
  // __infEs::iterStop_ = 1000;
  _infEs_::storeVertices_ = false;  // reconstructed: assignment lost in extraction
  _infEs_::storeBNOpt_ = false;

  this->setMaxTime(60);
  this->enableMaxTime();

  this->setPeriodSize(1000);

  GUM_CONSTRUCTOR(CNMonteCarloSampling);
}

◆ ~CNMonteCarloSampling()

template<typename GUM_SCALAR , class BNInferenceEngine >
gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::~CNMonteCarloSampling ( )
virtual

Destructor.

Definition at line 48 of file CNMonteCarloSampling_tpl.h.

{
  GUM_DESTRUCTOR(CNMonteCarloSampling);
}

Member Function Documentation

◆ _binaryRep_()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::_binaryRep_ (std::vector< bool >& toFill, const Idx value) const
inline, private

Get the binary representation of a given value.

Parameters
toFill: A reference to the bits to fill. Size must be correct before passing argument (i.e. big enough to represent value).
value: The constant integer we want to binarize.

Definition at line 273 of file CNMonteCarloSampling_tpl.h.

{
  Idx n = value;
  auto tfsize = toFill.size();

  // get bits of choosen_vertex
  for (decltype(tfsize) i = 0; i < tfsize; i++) {
    toFill[i] = n & 1;
    n /= 2;
  }
}
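The bit-extraction loop above can be exercised standalone. This sketch substitutes std::size_t for gum::Idx (an assumption made only so the example compiles without aGrUM); the body is otherwise the listing above.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Standalone version of _binaryRep_: writes the binary representation of
// value into toFill, least-significant bit first. toFill must already be
// large enough to hold every bit of value.
void binaryRep(std::vector<bool>& toFill, std::size_t value) {
  std::size_t n = value;
  for (std::size_t i = 0; i < toFill.size(); ++i) {
    toFill[i] = n & 1;  // current lowest bit
    n /= 2;             // shift right by one
  }
}
```

Binarizing 5 over 4 bits yields {1, 0, 1, 0}: bit 0 and bit 2 set, matching 5 = 1 + 4.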

◆ _insertEvidence_()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::_insertEvidence_ ( )
inline, private

Insert CredalNet evidence into a thread BNInferenceEngine.

Definition at line 392 of file CNMonteCarloSampling_tpl.h.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::_threadInference_().

{
  if (this->evidence_.size() == 0) { return; }

  int this_thread = getThreadNumber();

  BNInferenceEngine* inference_engine = this->l_inferenceEngine_[this_thread];

  IBayesNet< GUM_SCALAR >* working_bn = this->workingSet_[this_thread];

  List< const Potential< GUM_SCALAR >* >* evi_list = this->workingSetE_[this_thread];

  if (evi_list->size() > 0) {
    for (const auto pot: *evi_list)
      inference_engine->addEvidence(*pot);
    return;
  }

  for (const auto& elt: this->evidence_) {
    Potential< GUM_SCALAR >* p = new Potential< GUM_SCALAR >;
    (*p) << working_bn->variable(elt.first);

    try {
      p->fillWith(elt.second);
    } catch (Exception& err) {
      GUM_SHOWERROR(err);
      throw(err);
    }

    evi_list->insert(p);
  }

  if (evi_list->size() > 0) {
    for (const auto pot: *evi_list)
      inference_engine->addEvidence(*pot);
  }
}

◆ _mcInitApproximationScheme_()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::_mcInitApproximationScheme_ ( )
private

Initialize approximation Scheme.

Definition at line 188 of file CNMonteCarloSampling_tpl.h.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::makeInference().

{
  // this->setEpsilon(std::numeric_limits< GUM_SCALAR >::min());
  this->setEpsilon(0.);
  this->enableEpsilon();  // to be sure

  this->disableMinEpsilonRate();
  this->disableMaxIter();

  this->initApproximationScheme();
}
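As a plain-C++ sketch of what this configuration means for the scheme: only the epsilon criterion is active, with threshold 0, so sampling stops exactly when an entire period leaves the bounds unchanged (error == 0). SchemeState and continueScheme are illustrative names for this sketch, not aGrUM's IApproximationSchemeConfiguration API; the rate and time criteria are omitted.

```cpp
#include <cassert>
#include <cstddef>

// Minimal model of the criteria installed by _mcInitApproximationScheme_:
// epsilon enabled with threshold 0, rate and max-iterations disabled.
struct SchemeState {
  double eps = 0.0;          // setEpsilon(0.)
  bool epsEnabled = true;    // enableEpsilon()
  bool rateEnabled = false;  // disableMinEpsilonRate()
  bool iterEnabled = false;  // disableMaxIter()

  // Returns true while the scheme should keep sampling.
  bool continueScheme(double error, std::size_t iter, std::size_t maxIter) const {
    if (epsEnabled && error <= eps) return false;   // bounds stopped moving
    if (iterEnabled && iter >= maxIter) return false;
    return true;  // rate/time criteria left out of this sketch
  }
};
```

With this configuration an iteration count far past any nominal limit still continues, because only a zero error (or, in the real class, the timeout it also enables) can stop the run.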

◆ _mcThreadDataCopy_()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::_mcThreadDataCopy_ ( )
private

Initialize threads data.

Definition at line 203 of file CNMonteCarloSampling_tpl.h.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::makeInference().

{
  int num_threads;
#pragma omp parallel
  {
    int this_thread = getThreadNumber();

// implicit wait clause (don't put nowait)
#pragma omp single
    {
      // should we ask for max threads instead (no difference here in practice)
      num_threads = getNumberOfRunningThreads();

      this->initThreadsData_(num_threads, _infEs_::storeVertices_, _infEs_::storeBNOpt_);  // reconstructed: call lost in extraction
      this->l_inferenceEngine_.resize(num_threads, nullptr);

      // if (_infEs_::storeBNOpt_)
      //   this->l_sampledNet_.resize(num_threads);
    }  // end of : single region

    // we could put those below in a function in InferenceEngine, but let's keep
    // this parallel region instead of breaking it and making another one to do
    // the same stuff in 2 places since:
    // !!! BNInferenceEngine still needs to be initialized here anyway !!!

    BayesNet< GUM_SCALAR >* thread_bn = new BayesNet< GUM_SCALAR >();
#pragma omp critical(Init)
    {
      // IBayesNet< GUM_SCALAR >* thread_bn = new IBayesNet< GUM_SCALAR >();  // (this->credalNet_->current_bn());
      *thread_bn = this->credalNet_->current_bn();
    }
    this->workingSet_[this_thread] = thread_bn;

    this->l_marginalMin_[this_thread] = this->marginalMin_;
    this->l_marginalMax_[this_thread] = this->marginalMax_;
    this->l_expectationMin_[this_thread] = this->expectationMin_;
    this->l_expectationMax_[this_thread] = this->expectationMax_;
    this->l_modal_[this_thread] = this->modal_;

    _infEs_::l_clusters_[this_thread].resize(2);
    _infEs_::l_clusters_[this_thread][0] = _infEs_::t0_;
    _infEs_::l_clusters_[this_thread][1] = _infEs_::t1_;

    if (_infEs_::storeVertices_) { this->l_marginalSets_[this_thread] = this->marginalSets_; }

    List< const Potential< GUM_SCALAR >* >* evi_list
        = new List< const Potential< GUM_SCALAR >* >();
    this->workingSetE_[this_thread] = evi_list;

    // #TODO: the next instruction works only for lazy propagation.
    // => find a way to remove the second argument
    BNInferenceEngine* inference_engine
        = new BNInferenceEngine((this->workingSet_[this_thread]),
                                /* second argument lost in extraction */);

    this->l_inferenceEngine_[this_thread] = inference_engine;

    if (_infEs_::storeBNOpt_) {
      VarMod2BNsMap< GUM_SCALAR >* threadOpt
          = new VarMod2BNsMap< GUM_SCALAR >(*this->credalNet_);
      this->l_optimalNet_[this_thread] = threadOpt;
    }
  }
}

◆ _threadInference_()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::_threadInference_ ( )
inline, private

Thread performs an inference using BNInferenceEngine.

Calls verticesSampling and insertEvidence.

Definition at line 178 of file CNMonteCarloSampling_tpl.h.

References gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::_insertEvidence_(), and gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::_verticesSampling_().

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::makeInference().

{
  int tId = getThreadNumber();
  _verticesSampling_();  // reconstructed: call lost in extraction (see brief above)

  this->l_inferenceEngine_[tId]->eraseAllEvidence();
  _insertEvidence_();    // reconstructed: call lost in extraction (see brief above)
  this->l_inferenceEngine_[tId]->makeInference();
}

◆ _threadUpdate_()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::_threadUpdate_ ( )
inline, private

Update thread data after a IBayesNet inference.

Definition at line 151 of file CNMonteCarloSampling_tpl.h.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::makeInference().

{
  int tId = getThreadNumber();
  // bool keepSample = false;

  if (this->l_inferenceEngine_[tId]->evidenceProbability() > 0) {
    const DAG& tDag = this->workingSet_[tId]->dag();

    for (auto node: tDag.nodes()) {
      const Potential< GUM_SCALAR >& potential(this->l_inferenceEngine_[tId]->posterior(node));
      Instantiation ins(potential);
      std::vector< GUM_SCALAR > vertex;

      for (ins.setFirst(); !ins.end(); ++ins) {
        vertex.push_back(potential[ins]);
      }

      // true for redundancy elimination of the node's credal set, but since
      // global marginals are only updated at the end of each period of the
      // approximation scheme, it is "useless" (and expensive) to check now
      this->updateThread_(node, vertex, false);

    }  // end of : for all nodes
  }    // end of : if p(e) > 0
}
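The elementwise absorption that updateThread_ performs on the running bounds can be shown in isolation. absorbVertex is an illustrative stand-in for this sketch, with std::vector<double> replacing aGrUM's potentials: the thread's posterior over a node becomes a candidate vertex, and the node's lower/upper marginals absorb it componentwise.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Fold one sampled posterior (a candidate vertex of the credal set) into
// the running lower/upper marginals of a node, state by state.
void absorbVertex(const std::vector<double>& vertex,
                  std::vector<double>& lower, std::vector<double>& upper) {
  for (std::size_t s = 0; s < vertex.size(); ++s) {
    lower[s] = std::min(lower[s], vertex[s]);  // lower bound only decreases
    upper[s] = std::max(upper[s], vertex[s]);  // upper bound only increases
  }
}
```

Starting from the initialization described below (lower = 1, upper = 0), absorbing {0.3, 0.7} and then {0.6, 0.4} leaves lower = {0.3, 0.4} and upper = {0.6, 0.7}.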

◆ _verticesSampling_()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::_verticesSampling_ ( )
inline, private

Thread samples a IBayesNet from the CredalNet.

Definition at line 287 of file CNMonteCarloSampling_tpl.h.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::_threadInference_().

{
  int this_thread = getThreadNumber();
  IBayesNet< GUM_SCALAR >* working_bn = this->workingSet_[this_thread];

  const auto cpt = &this->credalNet_->credalNet_currentCpt();

  using dBN = std::vector< std::vector< std::vector< bool > > >;

  dBN sample;

  if (_infEs_::storeBNOpt_) { sample = dBN(this->l_optimalNet_[this_thread]->getSampleDef()); }

  if (_infEs_::repetitiveInd_) {  // reconstructed: this opening test was lost in extraction
    const auto& t0 = _infEs_::l_clusters_[this_thread][0];
    const auto& t1 = _infEs_::l_clusters_[this_thread][1];

    for (const auto& elt: t0) {
      auto dSize = working_bn->variable(elt.first).domainSize();
      Potential< GUM_SCALAR >* potential(
          const_cast< Potential< GUM_SCALAR >* >(&working_bn->cpt(elt.first)));
      std::vector< GUM_SCALAR > var_cpt(potential->domainSize());

      Size pconfs = Size((*cpt)[elt.first].size());

      for (Size pconf = 0; pconf < pconfs; pconf++) {
        Size choosen_vertex = rand() % (*cpt)[elt.first][pconf].size();

        if (_infEs_::storeBNOpt_) { _binaryRep_(sample[elt.first][pconf], choosen_vertex); }

        for (Size mod = 0; mod < dSize; mod++) {
          var_cpt[pconf * dSize + mod] = (*cpt)[elt.first][pconf][choosen_vertex][mod];
        }
      }  // end of : pconf

      potential->fillWith(var_cpt);

      Size t0esize = Size(elt.second.size());

      for (Size pos = 0; pos < t0esize; pos++) {
        if (_infEs_::storeBNOpt_) { sample[elt.second[pos]] = sample[elt.first]; }

        Potential< GUM_SCALAR >* potential2(
            const_cast< Potential< GUM_SCALAR >* >(&working_bn->cpt(elt.second[pos])));
        potential2->fillWith(var_cpt);
      }
    }

    for (const auto& elt: t1) {
      auto dSize = working_bn->variable(elt.first).domainSize();
      Potential< GUM_SCALAR >* potential(
          const_cast< Potential< GUM_SCALAR >* >(&working_bn->cpt(elt.first)));
      std::vector< GUM_SCALAR > var_cpt(potential->domainSize());

      for (Size pconf = 0; pconf < (*cpt)[elt.first].size(); pconf++) {
        Idx choosen_vertex = Idx(rand() % (*cpt)[elt.first][pconf].size());

        if (_infEs_::storeBNOpt_) { _binaryRep_(sample[elt.first][pconf], choosen_vertex); }

        for (decltype(dSize) mod = 0; mod < dSize; mod++) {
          var_cpt[pconf * dSize + mod] = (*cpt)[elt.first][pconf][choosen_vertex][mod];
        }
      }  // end of : pconf

      potential->fillWith(var_cpt);

      auto t1esize = elt.second.size();

      for (decltype(t1esize) pos = 0; pos < t1esize; pos++) {
        if (_infEs_::storeBNOpt_) { sample[elt.second[pos]] = sample[elt.first]; }

        Potential< GUM_SCALAR >* potential2(
            const_cast< Potential< GUM_SCALAR >* >(&working_bn->cpt(elt.second[pos])));
        potential2->fillWith(var_cpt);
      }
    }

    if (_infEs_::storeBNOpt_) { this->l_optimalNet_[this_thread]->setCurrentSample(sample); }
  } else {
    for (auto node: working_bn->nodes()) {
      auto dSize = working_bn->variable(node).domainSize();
      Potential< GUM_SCALAR >* potential(
          const_cast< Potential< GUM_SCALAR >* >(&working_bn->cpt(node)));
      std::vector< GUM_SCALAR > var_cpt(potential->domainSize());

      auto pConfs = (*cpt)[node].size();

      for (decltype(pConfs) pconf = 0; pconf < pConfs; pconf++) {
        Size nVertices = Size((*cpt)[node][pconf].size());
        Idx choosen_vertex = Idx(rand() % nVertices);

        if (_infEs_::storeBNOpt_) { _binaryRep_(sample[node][pconf], choosen_vertex); }

        for (decltype(dSize) mod = 0; mod < dSize; mod++) {
          var_cpt[pconf * dSize + mod] = (*cpt)[node][pconf][choosen_vertex][mod];
        }
      }  // end of : pconf

      potential->fillWith(var_cpt);
    }

    if (_infEs_::storeBNOpt_) { this->l_optimalNet_[this_thread]->setCurrentSample(sample); }
  }
}
_clusters_ l_clusters_
Threads clusters.
unsigned int getThreadNumber()
Get the calling thread id.
bool repetitiveInd_
True if using repetitive independence ( dynamic network only ), False otherwise.
bool storeBNOpt_
Iterations limit stopping rule used by some algorithms such as CNMonteCarloSampling.
const CredalNet< GUM_SCALAR > * credalNet_
A pointer to the Credal Net used.
std::vector< VarMod2BNsMap< GUM_SCALAR > *> l_optimalNet_
Threads optimal IBayesNet.
void _binaryRep_(std::vector< bool > &toFill, const Idx value) const
Get the binary representation of a given value.
std::vector< _bnet_ *> workingSet_
Threads IBayesNet.
Size Idx
Type for indexes.
Definition: types.h:52
std::size_t Size
In aGrUM, hashed values are unsigned long int.
Definition: types.h:47
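The listing above draws, for each parent configuration `pconf`, one random vertex of the node's credal set and writes its `dSize` probabilities into a flattened CPT at offset `pconf * dSize`. The following self-contained sketch isolates that fill step outside of aGrUM; `std::mt19937` stands in for `rand()`, and a nested `std::vector` stands in for the credal CPT type used by the library.

```cpp
#include <cstddef>
#include <random>
#include <vector>

// cpt[pconf][vertex][mod]: for each parent configuration, the vertices of
// the credal set over the dSize modalities. Returns the flattened CPT
// obtained by picking one random vertex per configuration, using the same
// pconf * dSize + mod layout as the sampling code above.
std::vector<double> sampleVertexCpt(
    const std::vector<std::vector<std::vector<double>>>& cpt,
    std::size_t dSize, std::mt19937& gen) {
  std::vector<double> var_cpt(cpt.size() * dSize);
  for (std::size_t pconf = 0; pconf < cpt.size(); ++pconf) {
    std::uniform_int_distribution<std::size_t> pick(0, cpt[pconf].size() - 1);
    std::size_t chosen_vertex = pick(gen);  // one vertex per configuration
    for (std::size_t mod = 0; mod < dSize; ++mod)
      var_cpt[pconf * dSize + mod] = cpt[pconf][chosen_vertex][mod];
  }
  return var_cpt;
}
```

The resulting vector has exactly the shape expected by `Potential::fillWith` in the listing: one contiguous block of `dSize` values per parent configuration.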

◆ computeEpsilon_()

template<typename GUM_SCALAR , class BNInferenceEngine >
const GUM_SCALAR gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::computeEpsilon_ ( )
inlineprotectedinherited

Compute epsilon and update old marginals.

Returns
Epsilon.

Definition at line 296 of file multipleInferenceEngine_tpl.h.

296  {
297  GUM_SCALAR eps = 0;
298 #pragma omp parallel
299  {
300  GUM_SCALAR tEps = 0;
301  GUM_SCALAR delta;
302 
303  int tId = getThreadNumber();
304  long nsize = long(workingSet_[tId]->size());
305 
306 #pragma omp for
307 
308  for (long i = 0; i < nsize; i++) {
309  Size dSize = Size(l_marginalMin_[tId][i].size());
310 
311  for (Size j = 0; j < dSize; j++) {
312  // on min
313  delta = this->marginalMin_[i][j] - this->oldMarginalMin_[i][j];
314  delta = (delta < 0) ? (-delta) : delta;
315  tEps = (tEps < delta) ? delta : tEps;
316 
317  // on max
318  delta = this->marginalMax_[i][j] - this->oldMarginalMax_[i][j];
319  delta = (delta < 0) ? (-delta) : delta;
320  tEps = (tEps < delta) ? delta : tEps;
321 
322  this->oldMarginalMin_[i][j] = this->marginalMin_[i][j];
323  this->oldMarginalMax_[i][j] = this->marginalMax_[i][j];
324  }
325  } // end of : all variables
326 
327 #pragma omp critical(epsilon_max)
328  {
329 #pragma omp flush(eps)
330  eps = (eps < tEps) ? tEps : eps;
331  }
332 
333  } // end of : parallel region
334  return eps;
335  }
_margis_ l_marginalMin_
Threads lower marginals, one per thread.
margi oldMarginalMin_
Old lower marginals used to compute epsilon.
unsigned int getThreadNumber()
Get the calling thread id.
margi oldMarginalMax_
Old upper marginals used to compute epsilon.
std::vector< _bnet_ *> workingSet_
Threads IBayesNet.
std::size_t Size
In aGrUM, hashed values are unsigned long int.
Definition: types.h:47
margi marginalMax_
Upper marginals.
margi marginalMin_
Lower marginals.
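In essence, computeEpsilon_ returns the largest absolute change of any lower or upper marginal bound between two iterations, and copies the new bounds into the old ones. A minimal single-threaded sketch of that update in plain C++, independent of the aGrUM marginal types:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Largest absolute change between old and new marginal bounds; the old
// bounds are overwritten with the new ones, as in computeEpsilon_.
double compute_epsilon(std::vector<double>& old_min, std::vector<double>& old_max,
                       const std::vector<double>& new_min,
                       const std::vector<double>& new_max) {
  double eps = 0;
  for (std::size_t j = 0; j < new_min.size(); ++j) {
    eps = std::max(eps, std::fabs(new_min[j] - old_min[j]));  // on min
    eps = std::max(eps, std::fabs(new_max[j] - old_max[j]));  // on max
    old_min[j] = new_min[j];
    old_max[j] = new_max[j];
  }
  return eps;
}
```

The OpenMP version above computes the same quantity per thread (`tEps`) and then reduces to the global maximum inside the `critical(epsilon_max)` section.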

◆ continueApproximationScheme()

INLINE bool gum::ApproximationScheme::continueApproximationScheme ( double  error)
inherited

Update the scheme w.r.t the new error.

Test the stopping criteria that are enabled.

Parameters
error The new error value.
Returns
false if the state becomes != ApproximationSchemeSTATE::Continue
Exceptions
OperationNotAllowed Raised if state != ApproximationSchemeSTATE::Continue.

Definition at line 208 of file approximationScheme_inl.h.


208  {
209  // For coherence, we fix the time used in the method
210 
211  double timer_step = timer_.step();
212 
213  if (enabled_max_time_) {
214  if (timer_step > max_time_) {
216  return false;
217  }
218  }
219 
220  if (!startOfPeriod()) { return true; }
221 
223  GUM_ERROR(OperationNotAllowed,
224  "state of the approximation scheme is not correct : "
226  }
227 
228  if (verbosity()) { history_.push_back(error); }
229 
230  if (enabled_max_iter_) {
231  if (current_step_ > max_iter_) {
233  return false;
234  }
235  }
236 
238  current_epsilon_ = error; // eps rate isEnabled needs it so affectation was
239  // moved from eps isEnabled below
240 
241  if (enabled_eps_) {
242  if (current_epsilon_ <= eps_) {
244  return false;
245  }
246  }
247 
248  if (last_epsilon_ >= 0.) {
249  if (current_epsilon_ > .0) {
250  // ! current_epsilon_ can be 0. AND epsilon
251  // isEnabled can be disabled !
253  }
254  // limit with current eps ---> 0 is | 1 - ( last_eps / 0 ) | --->
255  // infinity the else means a return false if we isEnabled the rate below,
256  // as we would have returned false if epsilon isEnabled was enabled
257  else {
259  }
260 
261  if (enabled_min_rate_eps_) {
262  if (current_rate_ <= min_rate_eps_) {
264  return false;
265  }
266  }
267  }
268 
270  if (onProgress.hasListener()) {
272  }
273 
274  return true;
275  } else {
276  return false;
277  }
278  }
double max_time_
The timeout.
double step() const
Returns the delta time between now and the last reset() call (or the constructor).
Definition: timer_inl.h:41
Signaler3< Size, double, double > onProgress
Progression, error and time.
ApproximationSchemeSTATE current_state_
The current state.
void stopScheme_(ApproximationSchemeSTATE new_state)
Stop the scheme given a new state.
bool startOfPeriod()
Returns true if we are at the beginning of a period (compute error is mandatory). ...
bool enabled_max_iter_
If true, the maximum iterations stopping criterion is enabled.
double last_epsilon_
Last epsilon value.
double eps_
Threshold for convergence.
double min_rate_eps_
Threshold for the epsilon rate.
bool enabled_max_time_
If true, the timeout is enabled.
double current_rate_
Current rate.
Size max_iter_
The maximum iterations.
double current_epsilon_
Current epsilon.
bool enabled_eps_
If true, the threshold convergence is enabled.
ApproximationSchemeSTATE stateApproximationScheme() const
Returns the approximation scheme state.
std::vector< double > history_
The scheme history, used only if verbosity == true.
bool verbosity() const
Returns true if verbosity is enabled.
std::string messageApproximationScheme() const
Returns the approximation scheme message.
bool enabled_min_rate_eps_
If true, the minimal threshold for epsilon rate is enabled.
Size current_step_
The current step.
#define GUM_EMIT3(signal, arg1, arg2, arg3)
Definition: signaler3.h:41
#define GUM_ERROR(type, msg)
Definition: exceptions.h:51
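The typical use of continueApproximationScheme is a sampling loop that reports each period's error and lets the scheme decide whether to go on. The stand-in class below is illustrative only, not the aGrUM API: it mimics just the epsilon-threshold and max-iterations criteria of the real scheme.

```cpp
#include <cstddef>

// Simplified stand-in for ApproximationScheme: stops when the error drops
// to eps_ or below, or when max_iter_ iterations have been performed.
class MiniScheme {
  double eps_;
  std::size_t max_iter_, current_step_ = 0;
 public:
  MiniScheme(double eps, std::size_t max_iter) : eps_(eps), max_iter_(max_iter) {}
  bool continueApproximationScheme(double error) {
    ++current_step_;
    if (error <= eps_) return false;               // convergence criterion
    if (current_step_ >= max_iter_) return false;  // iteration limit
    return true;
  }
  std::size_t nbrIterations() const { return current_step_; }
};

// Driver loop in the style of an inference algorithm: perform one step,
// compute the new error, and ask the scheme whether to continue.
inline std::size_t run(MiniScheme& scheme) {
  double error = 1.0;
  do {
    error /= 2;  // stands for one sampling step + computeEpsilon_()
  } while (scheme.continueApproximationScheme(error));
  return scheme.nbrIterations();
}
```

The real method adds the timeout, the epsilon-rate criterion, history recording, and the `onProgress` signal emitted via `GUM_EMIT3`.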

◆ credalNet()

template<typename GUM_SCALAR >
const CredalNet< GUM_SCALAR > & gum::credal::InferenceEngine< GUM_SCALAR >::credalNet ( ) const
inherited

Get this credal network.

Returns
A constant reference to this CredalNet.

Definition at line 57 of file inferenceEngine_tpl.h.

57  {
58  return *credalNet_;
59  }
const CredalNet< GUM_SCALAR > * credalNet_
A pointer to the Credal Net used.

◆ currentTime()

INLINE double gum::ApproximationScheme::currentTime ( ) const
virtualinherited

Returns the current running time in second.

Returns
Returns the current running time in second.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 115 of file approximationScheme_inl.h.


115 { return timer_.step(); }
double step() const
Returns the delta time between now and the last reset() call (or the constructor).
Definition: timer_inl.h:41

◆ disableEpsilon()

INLINE void gum::ApproximationScheme::disableEpsilon ( )
virtualinherited

Disable stopping criterion on epsilon.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 53 of file approximationScheme_inl.h.


53 { enabled_eps_ = false; }
bool enabled_eps_
If true, the threshold convergence is enabled.

◆ disableMaxIter()

INLINE void gum::ApproximationScheme::disableMaxIter ( )
virtualinherited

Disable stopping criterion on max iterations.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 94 of file approximationScheme_inl.h.


94 { enabled_max_iter_ = false; }
bool enabled_max_iter_
If true, the maximum iterations stopping criterion is enabled.

◆ disableMaxTime()

INLINE void gum::ApproximationScheme::disableMaxTime ( )
virtualinherited

Disable stopping criterion on timeout.

Returns
Disable stopping criterion on timeout.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 118 of file approximationScheme_inl.h.


118 { enabled_max_time_ = false; }
bool enabled_max_time_
If true, the timeout is enabled.

◆ disableMinEpsilonRate()

INLINE void gum::ApproximationScheme::disableMinEpsilonRate ( )
virtualinherited

Disable stopping criterion on epsilon rate.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 74 of file approximationScheme_inl.h.


74 { enabled_min_rate_eps_ = false; }
bool enabled_min_rate_eps_
If true, the minimal threshold for epsilon rate is enabled.

◆ dynamicExpectations()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpectations ( )
inherited

Compute dynamic expectations.

See also
dynamicExpectations_ Only call this if an algorithm does not call it by itself.

Definition at line 699 of file inferenceEngine_tpl.h.

699  {
701  }
void dynamicExpectations_()
Rearrange lower and upper expectations to suit dynamic networks.

◆ dynamicExpectations_()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpectations_ ( )
protectedinherited

Rearrange lower and upper expectations to suit dynamic networks.

Definition at line 704 of file inferenceEngine_tpl.h.

704  {
705  // no modals, no expectations computed during inference
706  if (expectationMin_.empty() || modal_.empty()) return;
707 
708  // already called by the algorithm or the user
709  if (dynamicExpMax_.size() > 0 && dynamicExpMin_.size() > 0) return;
710 
711  // typedef typename std::map< int, GUM_SCALAR > innerMap;
712  using innerMap = typename gum::HashTable< int, GUM_SCALAR >;
713 
714  // typedef typename std::map< std::string, innerMap > outerMap;
715  using outerMap = typename gum::HashTable< std::string, innerMap >;
716 
717  // typedef typename std::map< std::string, std::vector< GUM_SCALAR > >
718  // mod;
719 
720  // si non dynamique, sauver directement expectationMin_ et Max (revient au
721  // meme
722  // mais plus rapide)
723  outerMap expectationsMin, expectationsMax;
724 
725  for (const auto& elt: expectationMin_) {
726  std::string var_name, time_step;
727 
728  var_name = credalNet_->current_bn().variable(elt.first).name();
729  auto delim = var_name.find_first_of("_");
730  time_step = var_name.substr(delim + 1, var_name.size());
731  var_name = var_name.substr(0, delim);
732 
733  // to be sure (don't store not monitored variables' expectations)
734  // although it
735  // should be taken care of before this point
736  if (!modal_.exists(var_name)) continue;
737 
738  expectationsMin.getWithDefault(var_name, innerMap())
739  .getWithDefault(atoi(time_step.c_str()), 0)
740  = elt.second; // we iterate with min iterators
741  expectationsMax.getWithDefault(var_name, innerMap())
742  .getWithDefault(atoi(time_step.c_str()), 0)
743  = expectationMax_[elt.first];
744  }
745 
746  for (const auto& elt: expectationsMin) {
747  typename std::vector< GUM_SCALAR > dynExp(elt.second.size());
748 
749  for (const auto& elt2: elt.second)
750  dynExp[elt2.first] = elt2.second;
751 
752  dynamicExpMin_.insert(elt.first, dynExp);
753  }
754 
755  for (const auto& elt: expectationsMax) {
756  typename std::vector< GUM_SCALAR > dynExp(elt.second.size());
757 
758  for (const auto& elt2: elt.second) {
759  dynExp[elt2.first] = elt2.second;
760  }
761 
762  dynamicExpMax_.insert(elt.first, dynExp);
763  }
764  }
dynExpe dynamicExpMin_
Lower dynamic expectations.
const CredalNet< GUM_SCALAR > * credalNet_
A pointer to the Credal Net used.
The class for generic Hash Tables.
Definition: hashTable.h:666
dynExpe modal_
Variables modalities used to compute expectations.
dynExpe dynamicExpMax_
Upper dynamic expectations.
expe expectationMax_
Upper expectations, if some variables modalities were inserted.
expe expectationMin_
Lower expectations, if some variables modalities were inserted.
bool empty() const noexcept
Indicates whether the hash table is empty.
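dynamicExpectations_ groups per-node expectations by splitting each variable name at its first underscore into a prefix and a time step ("temp_3" gives "temp" and 3). A hedged sketch of just that parsing step, assuming well-formed `name_step` variable names as the code above does:

```cpp
#include <cstdlib>
#include <string>
#include <utility>

// Split a dynamic-network variable name into (prefix, time step),
// mirroring the find_first_of("_") / substr logic of dynamicExpectations_.
inline std::pair<std::string, int> splitDynamicName(const std::string& var_name) {
  auto delim = var_name.find_first_of("_");
  std::string prefix = var_name.substr(0, delim);
  int step = std::atoi(var_name.substr(delim + 1).c_str());
  return {prefix, step};
}
```

The prefix is then used as the key into the per-variable maps (`dynamicExpMin_`, `dynamicExpMax_`) and the time step as the index into each expectation vector.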

◆ dynamicExpMax()

template<typename GUM_SCALAR >
const std::vector< GUM_SCALAR > & gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpMax ( const std::string &  varName) const
inherited

Get the upper dynamic expectation of a given variable prefix (without the time step included, i.e. call with "temp" to get "temp_0", ..., "temp_T").

Parameters
varName The variable name prefix whose upper expectation we want.
Returns
A constant reference to the variable upper expectation over all time steps.

Definition at line 497 of file inferenceEngine_tpl.h.

497  {
498  std::string errTxt = "const std::vector< GUM_SCALAR > & InferenceEngine< "
499  "GUM_SCALAR >::dynamicExpMax ( const std::string & "
500  "varName ) const : ";
501 
502  if (dynamicExpMax_.empty())
503  GUM_ERROR(OperationNotAllowed, errTxt + "_dynamicExpectations() needs to be called before")
504 
505  if (!dynamicExpMax_.exists(varName) /*dynamicExpMin_.find(varName) == dynamicExpMin_.end()*/)
506  GUM_ERROR(NotFound, errTxt + "variable name not found : " << varName)
507 
508  return dynamicExpMax_[varName];
509  }
dynExpe dynamicExpMax_
Upper dynamic expectations.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:51

◆ dynamicExpMin()

template<typename GUM_SCALAR >
const std::vector< GUM_SCALAR > & gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpMin ( const std::string &  varName) const
inherited

Get the lower dynamic expectation of a given variable prefix (without the time step included, i.e. call with "temp" to get "temp_0", ..., "temp_T").

Parameters
varName The variable name prefix whose lower expectation we want.
Returns
A constant reference to the variable lower expectation over all time steps.

Definition at line 481 of file inferenceEngine_tpl.h.

481  {
482  std::string errTxt = "const std::vector< GUM_SCALAR > & InferenceEngine< "
483  "GUM_SCALAR >::dynamicExpMin ( const std::string & "
484  "varName ) const : ";
485 
486  if (dynamicExpMin_.empty())
487  GUM_ERROR(OperationNotAllowed, errTxt + "_dynamicExpectations() needs to be called before")
488 
489  if (!dynamicExpMin_.exists(varName) /*dynamicExpMin_.find(varName) == dynamicExpMin_.end()*/)
490  GUM_ERROR(NotFound, errTxt + "variable name not found : " << varName)
491 
492  return dynamicExpMin_[varName];
493  }
dynExpe dynamicExpMin_
Lower dynamic expectations.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:51

◆ enableEpsilon()

INLINE void gum::ApproximationScheme::enableEpsilon ( )
virtualinherited

Enable stopping criterion on epsilon.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 56 of file approximationScheme_inl.h.


56 { enabled_eps_ = true; }
bool enabled_eps_
If true, the threshold convergence is enabled.

◆ enableMaxIter()

INLINE void gum::ApproximationScheme::enableMaxIter ( )
virtualinherited

Enable stopping criterion on max iterations.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 97 of file approximationScheme_inl.h.


97 { enabled_max_iter_ = true; }
bool enabled_max_iter_
If true, the maximum iterations stopping criterion is enabled.

◆ enableMaxTime()

INLINE void gum::ApproximationScheme::enableMaxTime ( )
virtualinherited

Enable stopping criterion on timeout.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 121 of file approximationScheme_inl.h.


121 { enabled_max_time_ = true; }
bool enabled_max_time_
If true, the timeout is enabled.

◆ enableMinEpsilonRate()

INLINE void gum::ApproximationScheme::enableMinEpsilonRate ( )
virtualinherited

Enable stopping criterion on epsilon rate.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 77 of file approximationScheme_inl.h.


77 { enabled_min_rate_eps_ = true; }
bool enabled_min_rate_eps_
If true, the minimal threshold for epsilon rate is enabled.

◆ epsilon()

INLINE double gum::ApproximationScheme::epsilon ( ) const
virtualinherited

Returns the value of epsilon.

Returns
Returns the value of epsilon.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 50 of file approximationScheme_inl.h.


50 { return eps_; }
double eps_
Threshold for convergence.

◆ eraseAllEvidence()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::eraseAllEvidence ( )
virtualinherited

Erase all inference related data to perform another one.

Evidence must be inserted again if needed, but modalities are kept. New modalities can be inserted using the appropriate method, which deletes the old ones.

Reimplemented from gum::credal::InferenceEngine< GUM_SCALAR >.

Definition at line 517 of file multipleInferenceEngine_tpl.h.

517  {
519  Size tsize = Size(workingSet_.size());
520 
521  // delete pointers
522  for (Size bn = 0; bn < tsize; bn++) {
523  if (_infE_::storeVertices_) l_marginalSets_[bn].clear();
524 
525  if (workingSet_[bn] != nullptr) delete workingSet_[bn];
526 
528  if (l_inferenceEngine_[bn] != nullptr) delete l_optimalNet_[bn];
529 
530  if (this->workingSetE_[bn] != nullptr) {
531  for (const auto ev: *workingSetE_[bn])
532  delete ev;
533 
534  delete workingSetE_[bn];
535  }
536 
537  if (l_inferenceEngine_[bn] != nullptr) delete l_inferenceEngine_[bn];
538  }
539 
540  // this is important, those will be resized with the correct number of
541  // threads.
542 
543  workingSet_.clear();
544  workingSetE_.clear();
545  l_inferenceEngine_.clear();
546  l_optimalNet_.clear();
547 
548  l_marginalMin_.clear();
549  l_marginalMax_.clear();
550  l_expectationMin_.clear();
551  l_expectationMax_.clear();
552  l_modal_.clear();
553  l_marginalSets_.clear();
554  l_evidence_.clear();
555  l_clusters_.clear();
556  }
_credalSets_ l_marginalSets_
Threads vertices.
std::vector< List< const Potential< GUM_SCALAR > *> *> workingSetE_
Threads evidence.
_margis_ l_marginalMin_
Threads lower marginals, one per thread.
_clusters_ l_clusters_
Threads clusters.
bool storeBNOpt_
True if optimal IBayesNets are stored for each variable and each modality, False otherwise.
bool storeVertices_
True if credal sets vertices are stored, False otherwise.
_expes_ l_expectationMax_
Threads upper expectations, one per thread.
_margis_ l_marginalMax_
Threads upper marginals, one per thread.
_expes_ l_expectationMin_
Threads lower expectations, one per thread.
std::vector< BNInferenceEngine *> l_inferenceEngine_
Threads BNInferenceEngine.
std::vector< VarMod2BNsMap< GUM_SCALAR > *> l_optimalNet_
Threads optimal IBayesNet.
std::vector< _bnet_ *> workingSet_
Threads IBayesNet.
std::size_t Size
In aGrUM, hashed values are unsigned long int.
Definition: types.h:47
virtual void eraseAllEvidence()
Erase all inference related data to perform another one.

◆ expectationMax() [1/2]

template<typename GUM_SCALAR >
const GUM_SCALAR & gum::credal::InferenceEngine< GUM_SCALAR >::expectationMax ( const NodeId  id) const
inherited

Get the upper expectation of a given node id.

Parameters
id The node id whose upper expectation we want.
Returns
A constant reference to this node upper expectation.

Definition at line 473 of file inferenceEngine_tpl.h.

473  {
474  try {
475  return expectationMax_[id];
476  } catch (NotFound& err) { throw(err); }
477  }
expe expectationMax_
Upper expectations, if some variables modalities were inserted.

◆ expectationMax() [2/2]

template<typename GUM_SCALAR >
const GUM_SCALAR & gum::credal::InferenceEngine< GUM_SCALAR >::expectationMax ( const std::string &  varName) const
inherited

Get the upper expectation of a given variable name.

Parameters
varName The variable name whose upper expectation we want.
Returns
A constant reference to this variable upper expectation.

Definition at line 459 of file inferenceEngine_tpl.h.

459  {
460  try {
461  return expectationMax_[credalNet_->current_bn().idFromName(varName)];
462  } catch (NotFound& err) { throw(err); }
463  }
const CredalNet< GUM_SCALAR > * credalNet_
A pointer to the Credal Net used.
expe expectationMax_
Upper expectations, if some variables modalities were inserted.

◆ expectationMin() [1/2]

template<typename GUM_SCALAR >
const GUM_SCALAR & gum::credal::InferenceEngine< GUM_SCALAR >::expectationMin ( const NodeId  id) const
inherited

Get the lower expectation of a given node id.

Parameters
id The node id whose lower expectation we want.
Returns
A constant reference to this node lower expectation.

Definition at line 466 of file inferenceEngine_tpl.h.

466  {
467  try {
468  return expectationMin_[id];
469  } catch (NotFound& err) { throw(err); }
470  }
expe expectationMin_
Lower expectations, if some variables modalities were inserted.

◆ expectationMin() [2/2]

template<typename GUM_SCALAR >
const GUM_SCALAR & gum::credal::InferenceEngine< GUM_SCALAR >::expectationMin ( const std::string &  varName) const
inherited

Get the lower expectation of a given variable name.

Parameters
varName The variable name whose lower expectation we want.
Returns
A constant reference to this variable lower expectation.

Definition at line 451 of file inferenceEngine_tpl.h.

451  {
452  try {
453  return expectationMin_[credalNet_->current_bn().idFromName(varName)];
454  } catch (NotFound& err) { throw(err); }
455  }
const CredalNet< GUM_SCALAR > * credalNet_
A pointer to the Credal Net used.
expe expectationMin_
Lower expectations, if some variables modalities were inserted.

◆ expFusion_()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::expFusion_ ( )
protectedinherited

Fusion of threads expectations.

Definition at line 400 of file multipleInferenceEngine_tpl.h.

400  {
401  // don't create threads if there are no modalities to compute expectations
402  if (this->modal_.empty()) return;
403 
404  // we can compute expectations from vertices of the final credal set
406 #pragma omp parallel
407  {
408  int threadId = getThreadNumber();
409 
410  if (!this->l_modal_[threadId].empty()) {
411  Size nsize = Size(workingSet_[threadId]->size());
412 
413 #pragma omp for
414 
415  for (long i = 0; i < long(nsize); i++) { // i needs to be signed (due to omp with
416  // visual c++ 15)
417  std::string var_name = workingSet_[threadId]->variable(i).name();
418  auto delim = var_name.find_first_of("_");
419  var_name = var_name.substr(0, delim);
420 
421  if (!l_modal_[threadId].exists(var_name)) continue;
422 
423  for (const auto& vertex: _infE_::marginalSets_[i]) {
424  GUM_SCALAR exp = 0;
425  Size vsize = Size(vertex.size());
426 
427  for (Size mod = 0; mod < vsize; mod++)
428  exp += vertex[mod] * l_modal_[threadId][var_name][mod];
429 
430  if (exp > _infE_::expectationMax_[i]) _infE_::expectationMax_[i] = exp;
431 
432  if (exp < _infE_::expectationMin_[i]) _infE_::expectationMin_[i] = exp;
433  }
434  } // end of : each variable parallel for
435  } // end of : if this thread has modals
436  } // end of parallel region
437  return;
438  }
439 
440 #pragma omp parallel
441  {
442  int threadId = getThreadNumber();
443 
444  if (!this->l_modal_[threadId].empty()) {
445  Size nsize = Size(workingSet_[threadId]->size());
446 #pragma omp for
447  for (long i = 0; i < long(nsize);
448  i++) { // long instead of Idx due to omp for visual C++15
449  std::string var_name = workingSet_[threadId]->variable(i).name();
450  auto delim = var_name.find_first_of("_");
451  var_name = var_name.substr(0, delim);
452 
453  if (!l_modal_[threadId].exists(var_name)) continue;
454 
455  Size tsize = Size(l_expectationMax_.size());
456 
457  for (Idx tId = 0; tId < tsize; tId++) {
458  if (l_expectationMax_[tId][i] > this->expectationMax_[i])
459  this->expectationMax_[i] = l_expectationMax_[tId][i];
460 
461  if (l_expectationMin_[tId][i] < this->expectationMin_[i])
462  this->expectationMin_[i] = l_expectationMin_[tId][i];
463  } // end of : each thread
464  } // end of : each variable
465  } // end of : if modals not empty
466  } // end of : parallel region
467  }
unsigned int getThreadNumber()
Get the calling thread id.
bool storeVertices_
True if credal sets vertices are stored, False otherwise.
credalSet marginalSets_
Credal sets vertices, if enabled.
_expes_ l_expectationMax_
Threads upper expectations, one per thread.
dynExpe modal_
Variables modalities used to compute expectations.
_expes_ l_expectationMin_
Threads lower expectations, one per thread.
expe expectationMax_
Upper expectations, if some variables modalities were inserted.
expe expectationMin_
Lower expectations, if some variables modalities were inserted.
std::vector< _bnet_ *> workingSet_
Threads IBayesNet.
std::size_t Size
In aGrUM, hashed values are unsigned long int.
Definition: types.h:47
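When vertices are not stored, the second parallel region of expFusion_ reduces the per-thread expectation bounds to global ones by keeping the smallest lower and the largest upper bound across threads. A single-variable sketch of that reduction:

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Fuse per-thread (min, max) expectation bounds for one variable into
// global bounds, as the second parallel region of expFusion_ does.
inline std::pair<double, double>
fuseExpectations(const std::vector<std::pair<double, double>>& per_thread) {
  double lo = per_thread.front().first, hi = per_thread.front().second;
  for (const auto& b : per_thread) {
    lo = std::min(lo, b.first);   // keep the smallest lower expectation
    hi = std::max(hi, b.second);  // keep the largest upper expectation
  }
  return {lo, hi};
}
```

The first parallel region in the listing does the same tightening but directly from the vertices of the final credal set, computing each vertex's expectation against the variable's modalities first.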

◆ getApproximationSchemeMsg()

template<typename GUM_SCALAR >
const std::string gum::credal::InferenceEngine< GUM_SCALAR >::getApproximationSchemeMsg ( )
inlineinherited

Get approximation scheme state.

Returns
A constant string about approximation scheme state.

Definition at line 500 of file inferenceEngine.h.

500 { return this->messageApproximationScheme(); }
std::string messageApproximationScheme() const
Returns the approximation scheme message.

◆ getT0Cluster()

template<typename GUM_SCALAR >
const NodeProperty< std::vector< NodeId > > & gum::credal::InferenceEngine< GUM_SCALAR >::getT0Cluster ( ) const
inherited

Get the t0_ cluster.

Returns
A constant reference to the t0_ cluster.

Definition at line 976 of file inferenceEngine_tpl.h.

976  {
977  return t0_;
978  }
cluster t0_
Clusters of nodes used with dynamic networks.

◆ getT1Cluster()

template<typename GUM_SCALAR >
const NodeProperty< std::vector< NodeId > > & gum::credal::InferenceEngine< GUM_SCALAR >::getT1Cluster ( ) const
inherited

Get the t1_ cluster.

Returns
A constant reference to the t1_ cluster.

Definition at line 982 of file inferenceEngine_tpl.h.

982  {
983  return t1_;
984  }
cluster t1_
Clusters of nodes used with dynamic networks.

◆ getVarMod2BNsMap()

template<typename GUM_SCALAR >
VarMod2BNsMap< GUM_SCALAR > * gum::credal::InferenceEngine< GUM_SCALAR >::getVarMod2BNsMap ( )
inherited

Get optimum IBayesNet.

Returns
A pointer to the optimal net object.

Definition at line 138 of file inferenceEngine_tpl.h.

138  {
139  return &dbnOpt_;
140  }
VarMod2BNsMap< GUM_SCALAR > dbnOpt_
Object used to efficiently store optimal bayes net during inference, for some algorithms.

◆ history()

INLINE const std::vector< double > & gum::ApproximationScheme::history ( ) const
virtualinherited

Returns the scheme history.

Returns
Returns the scheme history.
Exceptions
OperationNotAllowed Raised if the scheme was not performed or if verbosity is set to false.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 157 of file approximationScheme_inl.h.


157  {
159  GUM_ERROR(OperationNotAllowed, "state of the approximation scheme is undefined")
160  }
161 
162  if (verbosity() == false) { GUM_ERROR(OperationNotAllowed, "No history when verbosity=false") }
163 
164  return history_;
165  }
ApproximationSchemeSTATE stateApproximationScheme() const
Returns the approximation scheme state.
std::vector< double > history_
The scheme history, used only if verbosity == true.
bool verbosity() const
Returns true if verbosity is enabled.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:51

◆ initApproximationScheme()

INLINE void gum::ApproximationScheme::initApproximationScheme ( )
inherited

Initialise the scheme.

Definition at line 168 of file approximationScheme_inl.h.


168  {
170  current_step_ = 0;
172  history_.clear();
173  timer_.reset();
174  }
ApproximationSchemeSTATE current_state_
The current state.
void reset()
Reset the timer.
Definition: timer_inl.h:31
double current_rate_
Current rate.
double current_epsilon_
Current epsilon.
std::vector< double > history_
The scheme history, used only if verbosity == true.
Size current_step_
The current step.

◆ initExpectations_()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::initExpectations_ ( )
protectedinherited

Initialize lower and upper expectations before inference, with the lower expectation being initialized on the highest modality and the upper expectation being initialized on the lowest modality.

Definition at line 678 of file inferenceEngine_tpl.h.

678  {
681 
682  if (modal_.empty()) return;
683 
684  for (auto node: credalNet_->current_bn().nodes()) {
685  std::string var_name, time_step;
686 
687  var_name = credalNet_->current_bn().variable(node).name();
688  auto delim = var_name.find_first_of("_");
689  var_name = var_name.substr(0, delim);
690 
691  if (!modal_.exists(var_name)) continue;
692 
693  expectationMin_.insert(node, modal_[var_name].back());
694  expectationMax_.insert(node, modal_[var_name].front());
695  }
696  }
const CredalNet< GUM_SCALAR > * credalNet_
A pointer to the Credal Net used.
dynExpe modal_
Variables modalities used to compute expectations.
expe expectationMax_
Upper expectations, if some variables modalities were inserted.
expe expectationMin_
Lower expectations, if some variables modalities were inserted.
void clear()
Removes all the elements in the hash table.
value_type & insert(const Key &key, const Val &val)
Adds a new element (actually a copy of this element) into the hash table.
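Since modalities are stored in increasing order, initializing the lower expectation to the highest modality and the upper to the lowest guarantees that the first expectation computed during inference tightens both bounds. A minimal sketch of that initialization, assuming an ascending, non-empty modality vector as the code above does:

```cpp
#include <utility>
#include <vector>

// Initial (lower, upper) expectation bounds for one variable, as in
// initExpectations_: the lower bound starts at the highest modality
// (back of the vector), the upper bound at the lowest (front).
inline std::pair<double, double>
initExpectationBounds(const std::vector<double>& modalities) {
  return {modalities.back(), modalities.front()};
}
```

Any expectation of a distribution over these modalities lies between the two extremes, so the first computed value necessarily replaces both initial bounds.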

◆ initMarginals_()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::initMarginals_ ( )
protectedinherited

Initialize lower and upper old marginals and marginals before inference, with the lower marginal being 1 and the upper 0.

Definition at line 646 of file inferenceEngine_tpl.h.

646  {
651 
652  for (auto node: credalNet_->current_bn().nodes()) {
653  auto dSize = credalNet_->current_bn().variable(node).domainSize();
654  marginalMin_.insert(node, std::vector< GUM_SCALAR >(dSize, 1));
655  oldMarginalMin_.insert(node, std::vector< GUM_SCALAR >(dSize, 1));
656 
657  marginalMax_.insert(node, std::vector< GUM_SCALAR >(dSize, 0));
658  oldMarginalMax_.insert(node, std::vector< GUM_SCALAR >(dSize, 0));
659  }
660  }
margi oldMarginalMin_
Old lower marginals used to compute epsilon.
const CredalNet< GUM_SCALAR > * credalNet_
A pointer to the Credal Net used.
margi oldMarginalMax_
Old upper marginals used to compute epsilon.
void clear()
Removes all the elements in the hash table.
margi marginalMax_
Upper marginals.
value_type & insert(const Key &key, const Val &val)
Adds a new element (actually a copy of this element) into the hash table.
margi marginalMin_
Lower marginals.
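Not aGrUM code, but the initialization pattern above can be sketched with the standard library alone, using `std::map` in place of aGrUM's `NodeProperty` hash tables (all names here are hypothetical). Starting the lower bounds at 1 and the upper bounds at 0 guarantees the very first sampled marginal tightens both via min/max updates:

```cpp
#include <cstddef>
#include <map>
#include <vector>

// Hypothetical stand-in for aGrUM's NodeProperty: node id -> per-state bounds.
using Marginals = std::map<int, std::vector<double>>;

// Lower bounds start at 1 and upper bounds at 0, so the first sampled
// marginal always replaces them through min/max updates.
void init_marginals(const std::map<int, std::size_t>& domain_sizes,
                    Marginals& lower, Marginals& upper) {
  lower.clear();
  upper.clear();
  for (const auto& [node, dsize] : domain_sizes) {
    lower[node].assign(dsize, 1.0);
    upper[node].assign(dsize, 0.0);
  }
}
```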

◆ initMarginalSets_()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::initMarginalSets_ ( )
protected, inherited

Initialize credal set vertices with empty sets.

Definition at line 663 of file inferenceEngine_tpl.h.

663  {
664  marginalSets_.clear();
665 
666  if (!storeVertices_) return;
667 
668  for (auto node: credalNet_->current_bn().nodes())
669  marginalSets_.insert(node, std::vector< std::vector< GUM_SCALAR > >());
670  }
const CredalNet< GUM_SCALAR > * credalNet_
A pointer to the Credal Net used.
bool storeVertices_
True if credal sets vertices are stored, False otherwise.
credalSet marginalSets_
Credal sets vertices, if enabled.
void clear()
Removes all the elements in the hash table.
value_type & insert(const Key &key, const Val &val)
Adds a new element (actually a copy of this element) into the hash table.

◆ initThreadsData_()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::initThreadsData_ ( const Size num_threads,
const bool  _storeVertices_,
const bool  _storeBNOpt_ 
)
inline, protected, inherited

Initialize threads data.

Parameters
num_threads  The number of threads.
_storeVertices_  True if vertices should be stored, False otherwise.
_storeBNOpt_  True if optimal IBayesNet should be stored, False otherwise.

Definition at line 41 of file multipleInferenceEngine_tpl.h.

44  {
45  workingSet_.clear();
46  workingSet_.resize(num_threads, nullptr);
47  workingSetE_.clear();
48  workingSetE_.resize(num_threads, nullptr);
49 
50  l_marginalMin_.clear();
51  l_marginalMin_.resize(num_threads);
52  l_marginalMax_.clear();
53  l_marginalMax_.resize(num_threads);
54  l_expectationMin_.clear();
55  l_expectationMin_.resize(num_threads);
56  l_expectationMax_.clear();
57  l_expectationMax_.resize(num_threads);
58 
59  l_clusters_.clear();
60  l_clusters_.resize(num_threads);
61 
62  if (_storeVertices_) {
63  l_marginalSets_.clear();
64  l_marginalSets_.resize(num_threads);
65  }
66 
67  if (_storeBNOpt_) {
68  for (Size ptr = 0; ptr < this->l_optimalNet_.size(); ptr++)
69  if (this->l_optimalNet_[ptr] != nullptr) delete l_optimalNet_[ptr];
70 
71  l_optimalNet_.clear();
72  l_optimalNet_.resize(num_threads);
73  }
74 
75  l_modal_.clear();
76  l_modal_.resize(num_threads);
77 
78  this->oldMarginalMin_.clear();
79  this->oldMarginalMin_ = this->marginalMin_;
80  this->oldMarginalMax_.clear();
81  this->oldMarginalMax_ = this->marginalMax_;
82  }
_credalSets_ l_marginalSets_
Threads vertices.
std::vector< List< const Potential< GUM_SCALAR > *> *> workingSetE_
Threads evidence.
_margis_ l_marginalMin_
Threads lower marginals, one per thread.
margi oldMarginalMin_
Old lower marginals used to compute epsilon.
_clusters_ l_clusters_
Threads clusters.
margi oldMarginalMax_
Old upper marginals used to compute epsilon.
_expes_ l_expectationMax_
Threads upper expectations, one per thread.
_margis_ l_marginalMax_
Threads upper marginals, one per thread.
_expes_ l_expectationMin_
Threads lower expectations, one per thread.
std::vector< VarMod2BNsMap< GUM_SCALAR > *> l_optimalNet_
Threads optimal IBayesNet.
void clear()
Removes all the elements in the hash table.
std::vector< _bnet_ *> workingSet_
Threads IBayesNet.
std::size_t Size
In aGrUM, hashed values are unsigned long int.
Definition: types.h:47
margi marginalMax_
Upper marginals.
margi marginalMin_
Lower marginals.

◆ insertEvidence() [1/2]

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::insertEvidence ( const std::map< std::string, std::vector< GUM_SCALAR > > &  eviMap)
inherited

Insert evidence from map.

Parameters
eviMap  The map from variable name to likelihood.

Definition at line 226 of file inferenceEngine_tpl.h.

227  {
228  if (!evidence_.empty()) evidence_.clear();
229 
230  for (auto it = eviMap.cbegin(), theEnd = eviMap.cend(); it != theEnd; ++it) {
231  NodeId id;
232 
233  try {
234  id = credalNet_->current_bn().idFromName(it->first);
235  } catch (NotFound& err) {
236  GUM_SHOWERROR(err);
237  continue;
238  }
239 
240  evidence_.insert(id, it->second);
241  }
242  }
#define GUM_SHOWERROR(e)
Definition: exceptions.h:57
const CredalNet< GUM_SCALAR > * credalNet_
A pointer to the Credal Net used.
margi evidence_
Holds observed variables states.
void clear()
Removes all the elements in the hash table.
value_type & insert(const Key &key, const Val &val)
Adds a new element (actually a copy of this element) into the hash table.
bool empty() const noexcept
Indicates whether the hash table is empty.
Size NodeId
Type for node ids.
Definition: graphElements.h:97
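As the listing shows, entries whose variable name is unknown to the network are skipped with a warning rather than aborting the whole insertion. A stdlib-only sketch of that catch-and-continue filtering (names hypothetical, `std::set` standing in for the network's name lookup):

```cpp
#include <map>
#include <set>
#include <string>
#include <vector>

// Evidence as passed to insertEvidence(): variable name -> per-state likelihood.
using EviMap = std::map<std::string, std::vector<double>>;

// Keep only entries whose variable name exists in the network, mirroring
// the NotFound catch-and-continue behaviour of insertEvidence().
EviMap filter_evidence(const EviMap& evi, const std::set<std::string>& known) {
  EviMap out;
  for (const auto& [name, likelihood] : evi)
    if (known.count(name)) out[name] = likelihood;
  return out;
}
```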

◆ insertEvidence() [2/2]

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::insertEvidence ( const NodeProperty< std::vector< GUM_SCALAR > > &  evidence)
inherited

Insert evidence from Property.

Parameters
evidence  The Property on nodes containing the likelihoods.

Definition at line 248 of file inferenceEngine_tpl.h.

249  {
250  if (!evidence_.empty()) evidence_.clear();
251 
252  // use cbegin() to get const_iterator when available in aGrUM hashtables
253  for (const auto& elt: evidence) {
254  try {
255  credalNet_->current_bn().variable(elt.first);
256  } catch (NotFound& err) {
257  GUM_SHOWERROR(err);
258  continue;
259  }
260 
261  evidence_.insert(elt.first, elt.second);
262  }
263  }
#define GUM_SHOWERROR(e)
Definition: exceptions.h:57
const CredalNet< GUM_SCALAR > * credalNet_
A pointer to the Credal Net used.
margi evidence_
Holds observed variables states.
void clear()
Removes all the elements in the hash table.
value_type & insert(const Key &key, const Val &val)
Adds a new element (actually a copy of this element) into the hash table.
bool empty() const noexcept
Indicates whether the hash table is empty.

◆ insertEvidenceFile()

template<typename GUM_SCALAR , class BNInferenceEngine = LazyPropagation< GUM_SCALAR >>
virtual void gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::insertEvidenceFile ( const std::string &  path)
inline, virtual

Insert evidence from file.

Reimplemented from gum::credal::InferenceEngine< GUM_SCALAR >.

Definition at line 124 of file CNMonteCarloSampling.h.

124  {
125  InferenceEngine< GUM_SCALAR >::insertEvidenceFile(path);
126  };
virtual void insertEvidenceFile(const std::string &path)
Insert evidence from file.

◆ insertModals()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::insertModals ( const std::map< std::string, std::vector< GUM_SCALAR > > &  modals)
inherited

Insert variables modalities from map to compute expectations.

Parameters
modals  The map from variable name to modalities.

Definition at line 190 of file inferenceEngine_tpl.h.

191  {
192  if (!modal_.empty()) modal_.clear();
193 
194  for (auto it = modals.cbegin(), theEnd = modals.cend(); it != theEnd; ++it) {
195  NodeId id;
196 
197  try {
198  id = credalNet_->current_bn().idFromName(it->first);
199  } catch (NotFound& err) {
200  GUM_SHOWERROR(err);
201  continue;
202  }
203 
204  // check that modals are net compatible
205  auto dSize = credalNet_->current_bn().variable(id).domainSize();
206 
207  if (dSize != it->second.size()) continue;
208 
209  // GUM_ERROR(OperationNotAllowed, "void InferenceEngine< GUM_SCALAR
210  // >::insertModals( const std::map< std::string, std::vector< GUM_SCALAR
211  // > >
212  // &modals) : modalities does not respect variable cardinality : " <<
213  // credalNet_->current_bn().variable( id ).name() << " : " << dSize << "
214  // != "
215  // << it->second.size());
216 
217  modal_.insert(it->first, it->second); //[ it->first ] = it->second;
218  }
219 
220  //_modal = modals;
221 
222  initExpectations_();
223  }
void initExpectations_()
Initialize lower and upper expectations before inference; the lower expectation starts at the variable's last modality value and the upper at its first.
#define GUM_SHOWERROR(e)
Definition: exceptions.h:57
const CredalNet< GUM_SCALAR > * credalNet_
A pointer to the Credal Net used.
dynExpe modal_
Variables modalities used to compute expectations.
Size NodeId
Type for node ids.
Definition: graphElements.h:97

◆ insertModalsFile()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::insertModalsFile ( const std::string &  path)
inherited

Insert variables modalities from file to compute expectations.

Parameters
path  The path to the modalities file.

Definition at line 143 of file inferenceEngine_tpl.h.

143  {
144  std::ifstream mod_stream(path.c_str(), std::ios::in);
145 
146  if (!mod_stream.good()) {
147  GUM_ERROR(OperationNotAllowed,
148  "void InferenceEngine< GUM_SCALAR "
149  ">::insertModals(const std::string & path) : "
150  "could not open input file : "
151  << path);
152  }
153 
154  if (!modal_.empty()) modal_.clear();
155 
156  std::string line, tmp;
157  char * cstr, *p;
158 
159  while (mod_stream.good()) {
160  getline(mod_stream, line);
161 
162  if (line.size() == 0) continue;
163 
164  cstr = new char[line.size() + 1];
165  strcpy(cstr, line.c_str());
166 
167  p = strtok(cstr, " ");
168  tmp = p;
169 
170  std::vector< GUM_SCALAR > values;
171  p = strtok(nullptr, " ");
172 
173  while (p != nullptr) {
174  values.push_back(GUM_SCALAR(atof(p)));
175  p = strtok(nullptr, " ");
176  } // end of : line
177 
178  modal_.insert(tmp, values); //[tmp] = values;
179 
180  delete[] p;
181  delete[] cstr;
182  } // end of : file
183 
184  mod_stream.close();
185 
186  initExpectations_();
187  }
void initExpectations_()
Initialize lower and upper expectations before inference; the lower expectation starts at the variable's last modality value and the upper at its first.
dynExpe modal_
Variables modalities used to compute expectations.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:51
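The modalities file read above holds one variable per line: the variable name followed by its space-separated numeric modal values. A stdlib-only sketch of parsing one such line with `std::istringstream` instead of `strtok` (the helper name is hypothetical; this mirrors the listing's behaviour, it is not the library's API):

```cpp
#include <sstream>
#include <string>
#include <utility>
#include <vector>

// Parse one modalities line: "<var_name> <v0> <v1> ...".
// A safer sketch of what insertModalsFile() does with strtok()/atof().
std::pair<std::string, std::vector<double>>
parse_modal_line(const std::string& line) {
  std::istringstream in(line);
  std::string name;
  in >> name;  // first token is the variable name
  std::vector<double> values;
  double v;
  while (in >> v) values.push_back(v);  // remaining tokens are modal values
  return {name, values};
}
```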

◆ insertQuery()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::insertQuery ( const NodeProperty< std::vector< bool > > &  query)
inherited

Insert query variables and states from Property.

Parameters
query  The Property on nodes containing the queried variables' states.

Definition at line 327 of file inferenceEngine_tpl.h.

328  {
329  if (!query_.empty()) query_.clear();
330 
331  for (const auto& elt: query) {
332  try {
333  credalNet_->current_bn().variable(elt.first);
334  } catch (NotFound& err) {
335  GUM_SHOWERROR(err);
336  continue;
337  }
338 
339  query_.insert(elt.first, elt.second);
340  }
341  }
#define GUM_SHOWERROR(e)
Definition: exceptions.h:57
const CredalNet< GUM_SCALAR > * credalNet_
A pointer to the Credal Net used.
query query_
Holds the query nodes states.
void clear()
Removes all the elements in the hash table.
NodeProperty< std::vector< bool > > query
value_type & insert(const Key &key, const Val &val)
Adds a new element (actually a copy of this element) into the hash table.
bool empty() const noexcept
Indicates whether the hash table is empty.

◆ insertQueryFile()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::insertQueryFile ( const std::string &  path)
inherited

Insert query variables states from file.

Parameters
path  The path to the query file.

Definition at line 344 of file inferenceEngine_tpl.h.

344  {
345  std::ifstream evi_stream(path.c_str(), std::ios::in);
346 
347  if (!evi_stream.good()) {
348  GUM_ERROR(IOError,
349  "void InferenceEngine< GUM_SCALAR >::insertQuery(const "
350  "std::string & path) : could not open input file : "
351  << path);
352  }
353 
354  if (!query_.empty()) query_.clear();
355 
356  std::string line, tmp;
357  char * cstr, *p;
358 
359  while (evi_stream.good() && std::strcmp(line.c_str(), "[QUERY]") != 0) {
360  getline(evi_stream, line);
361  }
362 
363  while (evi_stream.good()) {
364  getline(evi_stream, line);
365 
366  if (std::strcmp(line.c_str(), "[EVIDENCE]") == 0) break;
367 
368  if (line.size() == 0) continue;
369 
370  cstr = new char[line.size() + 1];
371  strcpy(cstr, line.c_str());
372 
373  p = strtok(cstr, " ");
374  tmp = p;
375 
376  // if user input is wrong
377  NodeId node = -1;
378 
379  try {
380  node = credalNet_->current_bn().idFromName(tmp);
381  } catch (NotFound& err) {
382  GUM_SHOWERROR(err);
383  continue;
384  }
385 
386  auto dSize = credalNet_->current_bn().variable(node).domainSize();
387 
388  p = strtok(nullptr, " ");
389 
390  if (p == nullptr) {
391  query_.insert(node, std::vector< bool >(dSize, true));
392  } else {
393  std::vector< bool > values(dSize, false);
394 
395  while (p != nullptr) {
396  if ((Size)atoi(p) >= dSize)
397  GUM_ERROR(OutOfBounds,
398  "void InferenceEngine< GUM_SCALAR "
399  ">::insertQuery(const std::string & path) : "
400  "query modality is higher or equal to "
401  "cardinality");
402 
403  values[atoi(p)] = true;
404  p = strtok(nullptr, " ");
405  } // end of : line
406 
407  query_.insert(node, values);
408  }
409 
410  delete[] p;
411  delete[] cstr;
412  } // end of : file
413 
414  evi_stream.close();
415  }
#define GUM_SHOWERROR(e)
Definition: exceptions.h:57
const CredalNet< GUM_SCALAR > * credalNet_
A pointer to the Credal Net used.
query query_
Holds the query nodes states.
void clear()
Removes all the elements in the hash table.
std::size_t Size
In aGrUM, hashed values are unsigned long int.
Definition: types.h:47
value_type & insert(const Key &key, const Val &val)
Adds a new element (actually a copy of this element) into the hash table.
bool empty() const noexcept
Indicates whether the hash table is empty.
Size NodeId
Type for node ids.
Definition: graphElements.h:97
#define GUM_ERROR(type, msg)
Definition: exceptions.h:51
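In the query file parsed above, a variable name with no listed states queries every state, while listed state indices select those states only and an index at or above the cardinality is rejected. A stdlib-only sketch of building that per-state mask (hypothetical helper, exceptions standing in for GUM_ERROR):

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

// Build the per-state query mask used by insertQueryFile(): no explicit
// states means "query every state"; an out-of-range state index is
// rejected, as in the listing's OutOfBounds check.
std::vector<bool> query_mask(std::size_t domain_size,
                             const std::vector<std::size_t>& states) {
  if (states.empty()) return std::vector<bool>(domain_size, true);
  std::vector<bool> mask(domain_size, false);
  for (auto s : states) {
    if (s >= domain_size)
      throw std::out_of_range("query modality is higher or equal to cardinality");
    mask[s] = true;
  }
  return mask;
}
```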

◆ isEnabledEpsilon()

INLINE bool gum::ApproximationScheme::isEnabledEpsilon ( ) const
virtual, inherited

Returns true if stopping criterion on epsilon is enabled, false otherwise.

Returns
Returns true if stopping criterion on epsilon is enabled, false otherwise.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 60 of file approximationScheme_inl.h.

References gum::Set< Key, Alloc >::emplace().

60 { return enabled_eps_; }
bool enabled_eps_
If true, the threshold convergence is enabled.

◆ isEnabledMaxIter()

INLINE bool gum::ApproximationScheme::isEnabledMaxIter ( ) const
virtual, inherited

Returns true if stopping criterion on max iterations is enabled, false otherwise.

Returns
Returns true if stopping criterion on max iterations is enabled, false otherwise.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 101 of file approximationScheme_inl.h.

References gum::Set< Key, Alloc >::emplace().

101 { return enabled_max_iter_; }
bool enabled_max_iter_
If true, the maximum iterations stopping criterion is enabled.

◆ isEnabledMaxTime()

INLINE bool gum::ApproximationScheme::isEnabledMaxTime ( ) const
virtual, inherited

Returns true if stopping criterion on timeout is enabled, false otherwise.

Returns
Returns true if stopping criterion on timeout is enabled, false otherwise.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 125 of file approximationScheme_inl.h.

References gum::Set< Key, Alloc >::emplace().

125 { return enabled_max_time_; }
bool enabled_max_time_
If true, the timeout is enabled.

◆ isEnabledMinEpsilonRate()

INLINE bool gum::ApproximationScheme::isEnabledMinEpsilonRate ( ) const
virtual, inherited

Returns true if stopping criterion on epsilon rate is enabled, false otherwise.

Returns
Returns true if stopping criterion on epsilon rate is enabled, false otherwise.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 81 of file approximationScheme_inl.h.

References gum::Set< Key, Alloc >::emplace().

81 { return enabled_min_rate_eps_; }
bool enabled_min_rate_eps_
If true, the minimal threshold for epsilon rate is enabled.

◆ makeInference()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::makeInference ( )
virtual

Starts the inference.

Implements gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >.

Definition at line 53 of file CNMonteCarloSampling_tpl.h.

References gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::_mcInitApproximationScheme_(), gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::_mcThreadDataCopy_(), gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::_threadInference_(), and gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::_threadUpdate_().

53  {
54  if (_infEs_::repetitiveInd_) {
55  try {
56  this->repetitiveInit_();
57  } catch (InvalidArgument& err) {
58  GUM_SHOWERROR(err);
59  _infEs_::repetitiveInd_ = false;
60  }
61  }
62 
63  // debug
64 
65  _mcInitApproximationScheme_();
66 
67  _mcThreadDataCopy_();
68 
70  // don't put it after burnIn, it could stop with timeout : we want at
71  // least one
72  // burnIn and one periodSize
73  GUM_SCALAR eps = 1.; // to validate testSuite ?
74 
76  auto psize = this->periodSize();
77  /*
78 
79  auto remaining = this->remainingBurnIn();
80 
84  if ( remaining != 0 ) {
93  do {
94  eps = 0;
95 
96  int iters = int( ( remaining < psize ) ? remaining : psize );
97 
98  #pragma omp parallel for
99 
100  for ( int iter = 0; iter < iters; iter++ ) {
101  _threadInference_();
102  _threadUpdate_();
103  } // end of : parallel periodSize
104 
105  this->updateApproximationScheme( iters );
106 
108 
110 
111  remaining = this->remainingBurnIn();
112 
113  } while ( ( remaining > 0 ) && this->continueApproximationScheme( eps
114  ) );
115  }
116  */
117 
118  if (this->continueApproximationScheme(eps)) {
119  do {
120  eps = 0;
121 
122 // less overheads with high periodSize
123 #pragma omp parallel for
124 
125  for (int iter = 0; iter < int(psize); iter++) {
126  _threadInference_();
127  _threadUpdate_();
128  } // end of : parallel periodSize
129 
130  this->updateApproximationScheme(int(psize));
131 
132  this->updateMarginals_(); // fusion threads + update margi
133 
134  eps = this->computeEpsilon_(); // also updates oldMargi
135 
136  } while (this->continueApproximationScheme(eps));
137  }
138 
139  if (!this->modal_.empty()) { this->expFusion_(); }
140 
141  if (_infEs_::storeBNOpt_) { this->optFusion_(); }
142 
143  if (_infEs_::storeVertices_) { this->verticesFusion_(); }
144 
145  if (!this->modal_.empty()) {
146  this->dynamicExpectations_(); // work with any network
147  }
148  }
void _threadInference_()
Thread performs an inference using BNInferenceEngine.
#define GUM_SHOWERROR(e)
Definition: exceptions.h:57
bool repetitiveInd_
True if using repetitive independence ( dynamic network only ), False otherwise.
bool storeBNOpt_
True if optimal IBayesNet are stored, False otherwise.
void _mcThreadDataCopy_()
Initialize threads data.
bool storeVertices_
True if credal sets vertices are stored, False otherwise.
void _mcInitApproximationScheme_()
Initialize approximation Scheme.
dynExpe modal_
Variables modalities used to compute expectations.
Size periodSize() const
Returns the period size.
bool continueApproximationScheme(double error)
Update the scheme w.r.t the new error.
void expFusion_()
Fusion of threads expectations.
void repetitiveInit_()
Initialize t0_ and t1_ clusters.
void updateMarginals_()
Fusion of threads marginals.
void optFusion_()
Fusion of threads optimal IBayesNet.
const GUM_SCALAR computeEpsilon_()
Compute epsilon and update old marginals.
void dynamicExpectations_()
Rearrange lower and upper expectations to suit dynamic networks.
void _threadUpdate_()
Update thread data after a IBayesNet inference.
void updateApproximationScheme(unsigned int incr=1)
Update the scheme w.r.t the new error and increment steps.
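The control flow of makeInference() can be sketched without aGrUM: run one "period" of sampling iterations, fuse the thread results, then test the stopping criterion on the new epsilon. This stdlib-only sketch (all names hypothetical) uses a callable in place of `_threadInference_`/`_threadUpdate_` plus `computeEpsilon_`, and starts eps at 1 so at least one period always runs:

```cpp
#include <cstddef>
#include <functional>

// Sketch of the makeInference() outer loop. epsilon_after(steps) stands in
// for "run fusion and compute epsilon after `steps` total iterations".
std::size_t run_until_converged(
    std::size_t period_size, double eps_threshold, std::size_t max_iter,
    const std::function<double(std::size_t)>& epsilon_after) {
  std::size_t steps = 0;
  double eps = 1.0;  // start above the threshold: at least one period runs
  while (eps > eps_threshold && steps < max_iter) {
    steps += period_size;        // one parallel period of samples
    eps = epsilon_after(steps);  // fuse threads, update old marginals, get eps
  }
  return steps;
}
```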

◆ marginalMax() [1/2]

template<typename GUM_SCALAR >
gum::Potential< GUM_SCALAR > gum::credal::InferenceEngine< GUM_SCALAR >::marginalMax ( const NodeId  id) const
inherited

Get the upper marginals of a given node id.

Parameters
id  The node id whose upper marginals we want.
Returns
A Potential holding this node's upper marginals.

Definition at line 440 of file inferenceEngine_tpl.h.

440  {
441  try {
442  Potential< GUM_SCALAR > res;
443  res.add(credalNet_->current_bn().variable(id));
444  res.fillWith(marginalMax_[id]);
445  return res;
446  } catch (NotFound& err) { throw(err); }
447  }
const CredalNet< GUM_SCALAR > * credalNet_
A pointer to the Credal Net used.
margi marginalMax_
Upper marginals.

◆ marginalMax() [2/2]

template<typename GUM_SCALAR >
INLINE Potential< GUM_SCALAR > gum::credal::InferenceEngine< GUM_SCALAR >::marginalMax ( const std::string &  varName) const
inherited

Get the upper marginals of a given variable name.

Parameters
varName  The variable name whose upper marginals we want.
Returns
A Potential holding this variable's upper marginals.

Definition at line 425 of file inferenceEngine_tpl.h.

425  {
426  return marginalMax(credalNet_->current_bn().idFromName(varName));
427  }
Potential< GUM_SCALAR > marginalMax(const NodeId id) const
Get the upper marginals of a given node id.
const CredalNet< GUM_SCALAR > * credalNet_
A pointer to the Credal Net used.

◆ marginalMin() [1/2]

template<typename GUM_SCALAR >
gum::Potential< GUM_SCALAR > gum::credal::InferenceEngine< GUM_SCALAR >::marginalMin ( const NodeId  id) const
inherited

Get the lower marginals of a given node id.

Parameters
id  The node id whose lower marginals we want.
Returns
A Potential holding this node's lower marginals.

Definition at line 430 of file inferenceEngine_tpl.h.

430  {
431  try {
432  Potential< GUM_SCALAR > res;
433  res.add(credalNet_->current_bn().variable(id));
434  res.fillWith(marginalMin_[id]);
435  return res;
436  } catch (NotFound& err) { throw(err); }
437  }
const CredalNet< GUM_SCALAR > * credalNet_
A pointer to the Credal Net used.
margi marginalMin_
Lower marginals.

◆ marginalMin() [2/2]

template<typename GUM_SCALAR >
INLINE Potential< GUM_SCALAR > gum::credal::InferenceEngine< GUM_SCALAR >::marginalMin ( const std::string &  varName) const
inherited

Get the lower marginals of a given variable name.

Parameters
varName  The variable name whose lower marginals we want.
Returns
A Potential holding this variable's lower marginals.

Definition at line 419 of file inferenceEngine_tpl.h.

419  {
420  return marginalMin(credalNet_->current_bn().idFromName(varName));
421  }
const CredalNet< GUM_SCALAR > * credalNet_
A pointer to the Credal Net used.
Potential< GUM_SCALAR > marginalMin(const NodeId id) const
Get the lower marginals of a given node id.
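The lower/upper marginals returned above are the envelope of the precise marginals sampled during inference: each sample can only widen the interval from its degenerate start (lower 1, upper 0). A stdlib-only sketch of that update rule (hypothetical helper, not the library's API):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Absorb one sampled (precise) marginal into the credal interval:
// the lower bound takes the elementwise min, the upper the elementwise max.
void absorb_sample(std::vector<double>& lower, std::vector<double>& upper,
                   const std::vector<double>& sampled) {
  for (std::size_t k = 0; k < sampled.size(); ++k) {
    lower[k] = std::min(lower[k], sampled[k]);
    upper[k] = std::max(upper[k], sampled[k]);
  }
}
```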

◆ maxIter()

INLINE Size gum::ApproximationScheme::maxIter ( ) const
virtual, inherited

Returns the criterion on number of iterations.

Returns
Returns the criterion on number of iterations.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 91 of file approximationScheme_inl.h.

References gum::Set< Key, Alloc >::emplace().

91 { return max_iter_; }
Size max_iter_
The maximum iterations.

◆ maxTime()

INLINE double gum::ApproximationScheme::maxTime ( ) const
virtual, inherited

Returns the timeout (in seconds).

Returns
Returns the timeout (in seconds).

Implements gum::IApproximationSchemeConfiguration.

Definition at line 112 of file approximationScheme_inl.h.

References gum::Set< Key, Alloc >::emplace().

112 { return max_time_; }
double max_time_
The timeout.

◆ messageApproximationScheme()

INLINE std::string gum::IApproximationSchemeConfiguration::messageApproximationScheme ( ) const
inherited

Returns the approximation scheme message.

Returns
Returns the approximation scheme message.

Definition at line 38 of file IApproximationSchemeConfiguration_inl.h.

References gum::Set< Key, Alloc >::emplace().

38  {
39  std::stringstream s;
40 
41  switch (stateApproximationScheme()) {
42  case ApproximationSchemeSTATE::Continue:
43  s << "in progress";
44  break;
45 
46  case ApproximationSchemeSTATE::Epsilon:
47  s << "stopped with epsilon=" << epsilon();
48  break;
49 
50  case ApproximationSchemeSTATE::Rate:
51  s << "stopped with rate=" << minEpsilonRate();
52  break;
53 
54  case ApproximationSchemeSTATE::Limit:
55  s << "stopped with max iteration=" << maxIter();
56  break;
57 
58  case ApproximationSchemeSTATE::TimeLimit:
59  s << "stopped with timeout=" << maxTime();
60  break;
61 
62  case ApproximationSchemeSTATE::Stopped:
63  s << "stopped on request";
64  break;
65 
66  case ApproximationSchemeSTATE::Undefined:
67  s << "undefined state";
68  break;
69  };
70 
71  return s.str();
72  }
virtual double epsilon() const =0
Returns the value of epsilon.
virtual ApproximationSchemeSTATE stateApproximationScheme() const =0
Returns the approximation scheme state.
virtual double maxTime() const =0
Returns the timeout (in seconds).
virtual Size maxIter() const =0
Returns the criterion on number of iterations.
virtual double minEpsilonRate() const =0
Returns the value of the minimal epsilon rate.

◆ minEpsilonRate()

INLINE double gum::ApproximationScheme::minEpsilonRate ( ) const
virtual, inherited

Returns the value of the minimal epsilon rate.

Returns
Returns the value of the minimal epsilon rate.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 71 of file approximationScheme_inl.h.

References gum::Set< Key, Alloc >::emplace().

71 { return min_rate_eps_; }
double min_rate_eps_
Threshold for the epsilon rate.

◆ nbrIterations()

INLINE Size gum::ApproximationScheme::nbrIterations ( ) const
virtual, inherited

Returns the number of iterations.

Returns
Returns the number of iterations.
Exceptions
OperationNotAllowed  Raised if the scheme did not perform.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 148 of file approximationScheme_inl.h.

References gum::Set< Key, Alloc >::emplace().

148  {
149  if (stateApproximationScheme() == ApproximationSchemeSTATE::Undefined) {
150  GUM_ERROR(OperationNotAllowed, "state of the approximation scheme is undefined")
151  }
152 
153  return current_step_;
154  }
ApproximationSchemeSTATE stateApproximationScheme() const
Returns the approximation scheme state.
Size current_step_
The current step.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:51

◆ optFusion_()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::optFusion_ ( )
protected, inherited

Fusion of threads optimal IBayesNet.

Definition at line 470 of file multipleInferenceEngine_tpl.h.

470  {
471  typedef std::vector< bool > dBN;
472 
473  Size nsize = Size(workingSet_[0]->size());
474 
475  // no parallel insert in hash-tables (OptBN)
476  for (Idx i = 0; i < nsize; i++) {
477  // we don't store anything for observed variables
478  if (_infE_::evidence_.exists(i)) continue;
479 
480  Size dSize = Size(l_marginalMin_[0][i].size());
481 
482  for (Size j = 0; j < dSize; j++) {
483  // go through all threads
484  std::vector< Size > keymin(3);
485  keymin[0] = i;
486  keymin[1] = j;
487  keymin[2] = 0;
488  std::vector< Size > keymax(keymin);
489  keymax[2] = 1;
490 
491  Size tsize = Size(l_marginalMin_.size());
492 
493  for (Size tId = 0; tId < tsize; tId++) {
494  if (l_marginalMin_[tId][i][j] == this->marginalMin_[i][j]) {
495  const std::vector< dBN* >& tOpts = l_optimalNet_[tId]->getBNOptsFromKey(keymin);
496  Size osize = Size(tOpts.size());
497 
498  for (Size bn = 0; bn < osize; bn++) {
499  _infE_::dbnOpt_.insert(*tOpts[bn], keymin);
500  }
501  }
502 
503  if (l_marginalMax_[tId][i][j] == this->marginalMax_[i][j]) {
504  const std::vector< dBN* >& tOpts = l_optimalNet_[tId]->getBNOptsFromKey(keymax);
505  Size osize = Size(tOpts.size());
506 
507  for (Size bn = 0; bn < osize; bn++) {
508  _infE_::dbnOpt_.insert(*tOpts[bn], keymax);
509  }
510  }
511  } // end of : all threads
512  } // end of : all modalities
513  } // end of : all variables
514  }
_margis_ l_marginalMin_
Threads lower marginals, one per thread.
_margis_ l_marginalMax_
Threads upper marginals, one per thread.
margi evidence_
Holds observed variables states.
std::vector< VarMod2BNsMap< GUM_SCALAR > *> l_optimalNet_
Threads optimal IBayesNet.
VarMod2BNsMap< GUM_SCALAR > dbnOpt_
Object used to efficiently store optimal bayes net during inference, for some algorithms.
std::vector< _bnet_ *> workingSet_
Threads IBayesNet.
std::size_t Size
In aGrUM, hashed values are unsigned long int.
Definition: types.h:47
margi marginalMax_
Upper marginals.
margi marginalMin_
Lower marginals.
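optFusion_ keeps, for each variable/modality bound, the optimal nets of every thread whose local bound equals the fused global bound (the `l_marginalMin_[tId][i][j] == marginalMin_[i][j]` test in the listing). A stdlib-only sketch of that selection step (hypothetical helper):

```cpp
#include <cstddef>
#include <vector>

// Return the indices of the threads whose local bound attains the fused
// global bound; only those threads' optimal nets are worth merging.
std::vector<std::size_t> threads_attaining(const std::vector<double>& per_thread,
                                           double global_bound) {
  std::vector<std::size_t> ids;
  for (std::size_t t = 0; t < per_thread.size(); ++t)
    if (per_thread[t] == global_bound) ids.push_back(t);
  return ids;
}
```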

◆ periodSize()

INLINE Size gum::ApproximationScheme::periodSize ( ) const
virtual, inherited

Returns the period size.

Returns
Returns the period size.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 134 of file approximationScheme_inl.h.

References gum::Set< Key, Alloc >::emplace().

134 { return period_size_; }
Size period_size_
Checking criteria frequency.

◆ remainingBurnIn()

INLINE Size gum::ApproximationScheme::remainingBurnIn ( )
inherited

Returns the remaining burn in.

Returns
Returns the remaining burn in.

Definition at line 191 of file approximationScheme_inl.h.

References gum::Set< Key, Alloc >::emplace().

191  {
192  if (burn_in_ > current_step_) {
193  return burn_in_ - current_step_;
194  } else {
195  return 0;
196  }
197  }
Size burn_in_
Number of iterations before checking stopping criteria.
Size current_step_
The current step.

◆ repetitiveInd()

template<typename GUM_SCALAR >
bool gum::credal::InferenceEngine< GUM_SCALAR >::repetitiveInd ( ) const
inherited

Get the current independence status.

Returns
True if repetitive, False otherwise.

Definition at line 118 of file inferenceEngine_tpl.h.

118  {
119  return repetitiveInd_;
120  }
bool repetitiveInd_
True if using repetitive independence ( dynamic network only ), False otherwise.

◆ repetitiveInit_()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::repetitiveInit_ ( )
protected, inherited

Initialize t0_ and t1_ clusters.

Definition at line 767 of file inferenceEngine_tpl.h.

767  {
768  timeSteps_ = 0;
769  t0_.clear();
770  t1_.clear();
771 
772  // t = 0 vars belongs to t0_ as keys
773  for (auto node: credalNet_->current_bn().dag().nodes()) {
774  std::string var_name = credalNet_->current_bn().variable(node).name();
775  auto delim = var_name.find_first_of("_");
776 
777  if (delim > var_name.size()) {
778  GUM_ERROR(InvalidArgument,
779  "void InferenceEngine< GUM_SCALAR "
780  ">::repetitiveInit_() : the network does not "
781  "appear to be dynamic");
782  }
783 
784  std::string time_step = var_name.substr(delim + 1, 1);
785 
786  if (time_step.compare("0") == 0) t0_.insert(node, std::vector< NodeId >());
787  }
788 
789  // t = 1 vars belongs to either t0_ as member value or t1_ as keys
790  for (const auto& node: credalNet_->current_bn().dag().nodes()) {
791  std::string var_name = credalNet_->current_bn().variable(node).name();
792  auto delim = var_name.find_first_of("_");
793  std::string time_step = var_name.substr(delim + 1, var_name.size());
794  var_name = var_name.substr(0, delim);
795  delim = time_step.find_first_of("_");
796  time_step = time_step.substr(0, delim);
797 
798  if (time_step.compare("1") == 0) {
799  bool found = false;
800 
801  for (const auto& elt: t0_) {
802  std::string var_0_name = credalNet_->current_bn().variable(elt.first).name();
803  delim = var_0_name.find_first_of("_");
804  var_0_name = var_0_name.substr(0, delim);
805 
806  if (var_name.compare(var_0_name) == 0) {
807  const Potential< GUM_SCALAR >* potential(&credalNet_->current_bn().cpt(node));
808  const Potential< GUM_SCALAR >* potential2(&credalNet_->current_bn().cpt(elt.first));
809 
810  if (potential->domainSize() == potential2->domainSize())
811  t0_[elt.first].push_back(node);
812  else
813  t1_.insert(node, std::vector< NodeId >());
814 
815  found = true;
816  break;
817  }
818  }
819 
820  if (!found) { t1_.insert(node, std::vector< NodeId >()); }
821  }
822  }
823 
824  // t > 1 vars belongs to either t0_ or t1_ as member value
825  // remember timeSteps_
826  for (auto node: credalNet_->current_bn().dag().nodes()) {
827  std::string var_name = credalNet_->current_bn().variable(node).name();
828  auto delim = var_name.find_first_of("_");
829  std::string time_step = var_name.substr(delim + 1, var_name.size());
830  var_name = var_name.substr(0, delim);
831  delim = time_step.find_first_of("_");
832  time_step = time_step.substr(0, delim);
833 
834  if (time_step.compare("0") != 0 && time_step.compare("1") != 0) {
835  // keep max time_step
836  if (atoi(time_step.c_str()) > timeSteps_) timeSteps_ = atoi(time_step.c_str());
837 
838  std::string var_0_name;
839  bool found = false;
840 
841  for (const auto& elt: t0_) {
842  std::string var_0_name = credalNet_->current_bn().variable(elt.first).name();
843  delim = var_0_name.find_first_of("_");
844  var_0_name = var_0_name.substr(0, delim);
845 
846  if (var_name.compare(var_0_name) == 0) {
847  const Potential< GUM_SCALAR >* potential(&credalNet_->current_bn().cpt(node));
848  const Potential< GUM_SCALAR >* potential2(&credalNet_->current_bn().cpt(elt.first));
849 
850  if (potential->domainSize() == potential2->domainSize()) {
851  t0_[elt.first].push_back(node);
852  found = true;
853  break;
854  }
855  }
856  }
857 
858  if (!found) {
859  for (const auto& elt: t1_) {
860  std::string var_0_name = credalNet_->current_bn().variable(elt.first).name();
861  auto delim = var_0_name.find_first_of("_");
862  var_0_name = var_0_name.substr(0, delim);
863 
864  if (var_name.compare(var_0_name) == 0) {
865  const Potential< GUM_SCALAR >* potential(&credalNet_->current_bn().cpt(node));
866  const Potential< GUM_SCALAR >* potential2(&credalNet_->current_bn().cpt(elt.first));
867 
868  if (potential->domainSize() == potential2->domainSize()) {
869  t1_[elt.first].push_back(node);
870  break;
871  }
872  }
873  }
874  }
875  }
876  }
877  }
const CredalNet< GUM_SCALAR > * credalNet_
A pointer to the Credal Net used.
cluster t1_
Clusters of nodes used with dynamic networks.
cluster t0_
Clusters of nodes used with dynamic networks.
void clear()
Removes all the elements in the hash table.
int timeSteps_
The number of time steps of this network (only useful for dynamic networks).
value_type & insert(const Key &key, const Val &val)
Adds a new element (actually a copy of this element) into the hash table.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:51
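The listing above repeatedly splits variable names of the form `<base>_<timestep>` (e.g. "temp_0", "temp_1") to decide cluster membership. A standalone sketch of that naming convention (the helper name is ours, not an aGrUM API):

```cpp
#include <cassert>
#include <stdexcept>
#include <string>
#include <utility>

// Split "<base>_<timestep>[_...]" into its base name and time step, mirroring
// the parsing done by repetitiveInit_ in the listing above.
std::pair< std::string, std::string > splitDynamicName(const std::string& var_name) {
  auto delim = var_name.find_first_of("_");

  if (delim == std::string::npos)
    throw std::invalid_argument("the network does not appear to be dynamic");

  std::string base      = var_name.substr(0, delim);
  std::string time_step = var_name.substr(delim + 1);
  // npos as the length keeps the whole remainder
  time_step = time_step.substr(0, time_step.find_first_of("_"));

  return {base, time_step};
}
```

Variables sharing a base name are then candidates for the same cluster, subject to the CPT domain-size check shown in the listing.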

◆ saveExpectations()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::saveExpectations ( const std::string &  path) const
inherited

Saves expectations to file.

Parameters
path: The path to the file to be used.

Definition at line 542 of file inferenceEngine_tpl.h.

542  {
543  if (dynamicExpMin_.empty()) //_modal.empty())
544  return;
545 
546  // else not here, to keep the const (natural with a saving process)
547  // else if(dynamicExpMin_.empty() || dynamicExpMax_.empty())
548  //_dynamicExpectations(); // works with or without a dynamic network
549 
550  std::ofstream m_stream(path.c_str(), std::ios::out | std::ios::trunc);
551 
552  if (!m_stream.good()) {
553  GUM_ERROR(IOError,
554  "void InferenceEngine< GUM_SCALAR "
555  ">::saveExpectations(const std::string & path) : could "
556  "not open output file : "
557  << path);
558  }
559 
560  for (const auto& elt: dynamicExpMin_) {
561  m_stream << elt.first; // it->first;
562 
563  // iterates over a vector
564  for (const auto& elt2: elt.second) {
565  m_stream << " " << elt2;
566  }
567 
568  m_stream << std::endl;
569  }
570 
571  for (const auto& elt: dynamicExpMax_) {
572  m_stream << elt.first;
573 
574  // iterates over a vector
575  for (const auto& elt2: elt.second) {
576  m_stream << " " << elt2;
577  }
578 
579  m_stream << std::endl;
580  }
581 
582  m_stream.close();
583  }
dynExpe dynamicExpMin_
Lower dynamic expectations.
dynExpe dynamicExpMax_
Upper dynamic expectations.

◆ saveMarginals()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::saveMarginals ( const std::string &  path) const
inherited

Saves marginals to file.

Parameters
path: The path to the file to be used.

Definition at line 518 of file inferenceEngine_tpl.h.

518  {
519  std::ofstream m_stream(path.c_str(), std::ios::out | std::ios::trunc);
520 
521  if (!m_stream.good()) {
522  GUM_ERROR(IOError,
523  "void InferenceEngine< GUM_SCALAR >::saveMarginals(const "
524  "std::string & path) const : could not open output file "
525  ": "
526  << path);
527  }
528 
529  for (const auto& elt: marginalMin_) {
530  Size esize = Size(elt.second.size());
531 
532  for (Size mod = 0; mod < esize; mod++) {
533  m_stream << credalNet_->current_bn().variable(elt.first).name() << " " << mod << " "
534  << (elt.second)[mod] << " " << marginalMax_[elt.first][mod] << std::endl;
535  }
536  }
537 
538  m_stream.close();
539  }
const CredalNet< GUM_SCALAR > * credalNet_
A pointer to the Credal Net used.
std::size_t Size
In aGrUM, hashed values are unsigned long int.
Definition: types.h:47
margi marginalMax_
Upper marginals.
margi marginalMin_
Lower marginals.

◆ saveVertices()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::saveVertices ( const std::string &  path) const
inherited

Saves vertices to file.

Parameters
path: The path to the file to be used.

Definition at line 612 of file inferenceEngine_tpl.h.

612  {
613  std::ofstream m_stream(path.c_str(), std::ios::out | std::ios::trunc);
614 
615  if (!m_stream.good()) {
616  GUM_ERROR(IOError,
617  "void InferenceEngine< GUM_SCALAR >::saveVertices(const "
618  "std::string & path) : could not open output file : "
619  << path);
620  }
621 
622  for (const auto& elt: marginalSets_) {
623  m_stream << credalNet_->current_bn().variable(elt.first).name() << std::endl;
624 
625  for (const auto& elt2: elt.second) {
626  m_stream << "[";
627  bool first = true;
628 
629  for (const auto& elt3: elt2) {
630  if (!first) {
631  m_stream << ",";
632  first = false;
633  }
634 
635  m_stream << elt3;
636  }
637 
638  m_stream << "]\n";
639  }
640  }
641 
642  m_stream.close();
643  }
const CredalNet< GUM_SCALAR > * credalNet_
A pointer to the Credal Net used.
credalSet marginalSets_
Credal sets vertices, if enabled.

◆ setEpsilon()

INLINE void gum::ApproximationScheme::setEpsilon ( double  eps)
virtualinherited

Given that we approximate f(t), stopping criterion on |f(t+1)-f(t)|.

If the criterion was disabled it will be enabled.

Parameters
eps: The new epsilon value.
Exceptions
OutOfLowerBound: Raised if eps < 0.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 42 of file approximationScheme_inl.h.


42  {
43  if (eps < 0.) { GUM_ERROR(OutOfLowerBound, "eps should be >=0") }
44 
45  eps_ = eps;
46  enabled_eps_ = true;
47  }
double eps_
Threshold for convergence.
bool enabled_eps_
If true, the threshold convergence is enabled.
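As a toy illustration of this criterion (hypothetical function, not aGrUM code): for a sequence f(t+1) = f(t)/2, the scheme would stop once the absolute change between two successive values drops below eps.

```cpp
#include <cassert>
#include <cmath>

// Count the iterations of f(t+1) = f(t) / 2 performed before the stopping
// criterion |f(t+1) - f(t)| < eps fires, as configured by setEpsilon.
int iterationsUntilConverged(double f0, double eps) {
  double f     = f0;
  int    steps = 0;

  while (true) {
    double next = f / 2.0;
    ++steps;
    if (std::fabs(next - f) < eps) break;   // criterion satisfied, stop
    f = next;
  }

  return steps;
}
```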

◆ setMaxIter()

INLINE void gum::ApproximationScheme::setMaxIter ( Size  max)
virtualinherited

Stopping criterion on number of iterations.

If the criterion was disabled it will be enabled.

Parameters
max: The maximum number of iterations.
Exceptions
OutOfLowerBound: Raised if max < 1.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 84 of file approximationScheme_inl.h.


84  {
85  if (max < 1) { GUM_ERROR(OutOfLowerBound, "max should be >=1") }
86  max_iter_ = max;
87  enabled_max_iter_ = true;
88  }
bool enabled_max_iter_
If true, the maximum iterations stopping criterion is enabled.
Size max_iter_
The maximum iterations.

◆ setMaxTime()

INLINE void gum::ApproximationScheme::setMaxTime ( double  timeout)
virtualinherited

Stopping criterion on timeout.

If the criterion was disabled it will be enabled.

Parameters
timeout: The timeout value in seconds.
Exceptions
OutOfLowerBound: Raised if timeout <= 0.0.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 105 of file approximationScheme_inl.h.


105  {
106  if (timeout <= 0.) { GUM_ERROR(OutOfLowerBound, "timeout should be >0.") }
107  max_time_ = timeout;
108  enabled_max_time_ = true;
109  }
double max_time_
The timeout.
bool enabled_max_time_
If true, the timeout is enabled.

◆ setMinEpsilonRate()

INLINE void gum::ApproximationScheme::setMinEpsilonRate ( double  rate)
virtualinherited

Given that we approximate f(t), stopping criterion on d/dt(|f(t+1)-f(t)|).

If the criterion was disabled it will be enabled.

Parameters
rate: The minimal epsilon rate.
Exceptions
OutOfLowerBound: Raised if rate < 0.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 63 of file approximationScheme_inl.h.


63  {
64  if (rate < 0) { GUM_ERROR(OutOfLowerBound, "rate should be >=0") }
65 
66  min_rate_eps_ = rate;
67  enabled_min_rate_eps_ = true;
68  }
double min_rate_eps_
Threshold for the epsilon rate.
bool enabled_min_rate_eps_
If true, the minimal threshold for epsilon rate is enabled.

◆ setPeriodSize()

INLINE void gum::ApproximationScheme::setPeriodSize ( Size  p)
virtualinherited

How many samples are drawn between two checks of the stopping criteria.

Parameters
p: The new period value.
Exceptions
OutOfLowerBound: Raised if p < 1.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 128 of file approximationScheme_inl.h.


128  {
129  if (p < 1) { GUM_ERROR(OutOfLowerBound, "p should be >=1") }
130 
131  period_size_ = p;
132  }
Size period_size_
Checking criteria frequency.

◆ setRepetitiveInd()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::setRepetitiveInd ( const bool  repetitive)
inherited
Parameters
repetitive: True if repetitive independence is to be used, false otherwise. Only useful with dynamic networks.

Definition at line 109 of file inferenceEngine_tpl.h.

109  {
110  bool oldValue = repetitiveInd_;
111  repetitiveInd_ = repetitive;
112 
113  // do not compute clusters more than once
114  if (repetitiveInd_ && !oldValue) repetitiveInit_();
115  }
bool repetitiveInd_
True if using repetitive independence ( dynamic network only ), False otherwise.
void repetitiveInit_()
Initialize t0_ and t1_ clusters.

◆ setVerbosity()

INLINE void gum::ApproximationScheme::setVerbosity ( bool  v)
virtualinherited

Set the verbosity on (true) or off (false).

Parameters
v: If true, then verbosity is turned on.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 137 of file approximationScheme_inl.h.


137 { verbosity_ = v; }
bool verbosity_
If true, verbosity is enabled.

◆ startOfPeriod()

INLINE bool gum::ApproximationScheme::startOfPeriod ( )
inherited

Returns true if we are at the beginning of a period (compute error is mandatory).

Returns
Returns true if we are at the beginning of a period (compute error is mandatory).

Definition at line 178 of file approximationScheme_inl.h.


178  {
179  if (current_step_ < burn_in_) { return false; }
180 
181  if (period_size_ == 1) { return true; }
182 
183  return ((current_step_ - burn_in_) % period_size_ == 0);
184  }
Size burn_in_
Number of iterations before checking stopping criteria.
Size period_size_
Checking criteria frequency.
Size current_step_
The current step.
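The logic above can be restated as a standalone predicate (for illustration only; parameter names mirror the protected members shown):

```cpp
#include <cassert>
#include <cstddef>

// The error is only worth (re)computing once burn-in is over and we sit on a
// period boundary, exactly as in the startOfPeriod listing above.
bool startOfPeriod(std::size_t current_step, std::size_t burn_in, std::size_t period_size) {
  if (current_step < burn_in) return false;   // still burning in
  if (period_size == 1) return true;          // check at every step

  return (current_step - burn_in) % period_size == 0;
}
```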

◆ stateApproximationScheme()

INLINE IApproximationSchemeConfiguration::ApproximationSchemeSTATE gum::ApproximationScheme::stateApproximationScheme ( ) const
virtualinherited

Returns the approximation scheme state.

Returns
Returns the approximation scheme state.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 143 of file approximationScheme_inl.h.


143  {
144  return current_state_;
145  }
ApproximationSchemeSTATE current_state_
The current state.

◆ stopApproximationScheme()

INLINE void gum::ApproximationScheme::stopApproximationScheme ( )
inherited

Stop the approximation scheme.

Definition at line 200 of file approximationScheme_inl.h.


◆ storeBNOpt() [1/2]

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::storeBNOpt ( const bool  value)
inherited
Parameters
value: True if optimal Bayesian networks are to be stored for each variable and each modality.

Definition at line 97 of file inferenceEngine_tpl.h.

97  {
98  storeBNOpt_ = value;
99  }
bool storeBNOpt_
True if optimal IBayesNets are stored for each variable and each modality, False otherwise.

◆ storeBNOpt() [2/2]

template<typename GUM_SCALAR >
bool gum::credal::InferenceEngine< GUM_SCALAR >::storeBNOpt ( ) const
inherited
Returns
True if optimal Bayes nets are stored for each variable and each modality, False otherwise.

Definition at line 133 of file inferenceEngine_tpl.h.

133  {
134  return storeBNOpt_;
135  }
bool storeBNOpt_
True if optimal IBayesNets are stored for each variable and each modality, False otherwise.

◆ storeVertices() [1/2]

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::storeVertices ( const bool  value)
inherited
Parameters
value: True if vertices are to be stored, false otherwise.

Definition at line 102 of file inferenceEngine_tpl.h.

102  {
103  storeVertices_ = value;
104 
105  if (value) initMarginalSets_();
106  }
bool storeVertices_
True if credal sets vertices are stored, False otherwise.
void initMarginalSets_()
Initialize credal set vertices with empty sets.

◆ storeVertices() [2/2]

template<typename GUM_SCALAR >
bool gum::credal::InferenceEngine< GUM_SCALAR >::storeVertices ( ) const
inherited

Get the vertices storage status.

Returns
True if credal set vertices are stored, False otherwise.

Definition at line 128 of file inferenceEngine_tpl.h.

128  {
129  return storeVertices_;
130  }
bool storeVertices_
True if credal sets vertices are stored, False otherwise.

◆ toString()

template<typename GUM_SCALAR >
std::string gum::credal::InferenceEngine< GUM_SCALAR >::toString ( ) const
inherited

Returns all the nodes' marginals as a string.

Definition at line 586 of file inferenceEngine_tpl.h.

586  {
587  std::stringstream output;
588  output << std::endl;
589 
590  // use cbegin() when available
591  for (const auto& elt: marginalMin_) {
592  Size esize = Size(elt.second.size());
593 
594  for (Size mod = 0; mod < esize; mod++) {
595  output << "P(" << credalNet_->current_bn().variable(elt.first).name() << "=" << mod
596  << "|e) = [ ";
597  output << marginalMin_[elt.first][mod] << ", " << marginalMax_[elt.first][mod] << " ]";
598 
599  if (!query_.empty())
600  if (query_.exists(elt.first) && query_[elt.first][mod]) output << " QUERY";
601 
602  output << std::endl;
603  }
604 
605  output << std::endl;
606  }
607 
608  return output.str();
609  }
const CredalNet< GUM_SCALAR > * credalNet_
A pointer to the Credal Net used.
bool exists(const Key &key) const
Checks whether there exists an element with a given key in the hashtable.
query query_
Holds the query nodes states.
margi marginalMax_
Upper marginals.
margi marginalMin_
Lower marginals.
bool empty() const noexcept
Indicates whether the hash table is empty.

◆ updateApproximationScheme()

INLINE void gum::ApproximationScheme::updateApproximationScheme ( unsigned int  incr = 1)
inherited

Update the scheme w.r.t. the new error and increment the current step.

Parameters
incr: The increment added to the current step.

Definition at line 187 of file approximationScheme_inl.h.


187  {
188  current_step_ += incr;
189  }
Size current_step_
The current step.

◆ updateCredalSets_()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::updateCredalSets_ ( const NodeId id,
const std::vector< GUM_SCALAR > &  vertex,
const bool elimRedund = false 
)
inlineprotectedinherited

Given a node id and one of its possible vertices, update its credal set.

To maximise efficiency, don't pass a vertex known to be inside the polytope (i.e. not at an extreme value for any modality).

Parameters
id: The id of the node to be updated.
vertex: A (potential) vertex of the node's credal set.
elimRedund: If true, remove redundant vertices (inside a facet).

Definition at line 902 of file inferenceEngine_tpl.h.

904  {
905  auto& nodeCredalSet = marginalSets_[id];
906  auto dsize = vertex.size();
907 
908  bool eq = true;
909 
910  for (auto it = nodeCredalSet.cbegin(), itEnd = nodeCredalSet.cend(); it != itEnd; ++it) {
911  eq = true;
912 
913  for (Size i = 0; i < dsize; i++) {
914  if (std::fabs(vertex[i] - (*it)[i]) > 1e-6) {
915  eq = false;
916  break;
917  }
918  }
919 
920  if (eq) break;
921  }
922 
923  if (!eq || nodeCredalSet.size() == 0) {
924  nodeCredalSet.push_back(vertex);
925  return;
926  } else
927  return;
928 
929  // because of next lambda return condition
930  if (nodeCredalSet.size() == 1) return;
931 
932  // check that the point and all previously added ones are not inside the
933  // actual
934  // polytope
935  auto itEnd
936  = std::remove_if(nodeCredalSet.begin(),
937  nodeCredalSet.end(),
938  [&](const std::vector< GUM_SCALAR >& v) -> bool {
939  for (auto jt = v.cbegin(),
940  jtEnd = v.cend(),
941  minIt = marginalMin_[id].cbegin(),
942  minItEnd = marginalMin_[id].cend(),
943  maxIt = marginalMax_[id].cbegin(),
944  maxItEnd = marginalMax_[id].cend();
945  jt != jtEnd && minIt != minItEnd && maxIt != maxItEnd;
946  ++jt, ++minIt, ++maxIt) {
947  if ((std::fabs(*jt - *minIt) < 1e-6 || std::fabs(*jt - *maxIt) < 1e-6)
948  && std::fabs(*minIt - *maxIt) > 1e-6)
949  return false;
950  }
951  return true;
952  });
953 
954  nodeCredalSet.erase(itEnd, nodeCredalSet.end());
955 
956  // we need at least 2 points to make a convex combination
957  if (!elimRedund || nodeCredalSet.size() <= 2) return;
958 
959  // there may be points not inside the polytope but on one of it's facet,
960  // meaning it's still a convex combination of vertices of this facet. Here
961  // we
962  // need lrs.
963  LRSWrapper< GUM_SCALAR > lrsWrapper;
964  lrsWrapper.setUpV((unsigned int)dsize, (unsigned int)(nodeCredalSet.size()));
965 
966  for (const auto& vtx: nodeCredalSet)
967  lrsWrapper.fillV(vtx);
968 
969  lrsWrapper.elimRedundVrep();
970 
971  marginalSets_[id] = lrsWrapper.getOutput();
972  }
credalSet marginalSets_
Credal sets vertices, if enabled.
const const_iterator & cend() const noexcept
Returns the unsafe const_iterator pointing to the end of the hashtable.
const_iterator cbegin() const
Returns an unsafe const_iterator pointing to the beginning of the hashtable.
margi marginalMax_
Upper marginals.
margi marginalMin_
Lower marginals.
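The first loop in the listing above is a tolerance-based membership test run before appending a vertex. A standalone sketch (function and variable names are ours, not aGrUM's):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Append a candidate vertex to a credal set only if it differs from every
// stored vertex by more than 1e-6 in at least one coordinate, as in the
// duplicate test of updateCredalSets_ above.
bool addVertexIfNew(std::vector< std::vector< double > >& credalSet,
                    const std::vector< double >&          vertex) {
  for (const auto& v: credalSet) {
    bool eq = true;

    for (std::size_t i = 0; i < vertex.size(); ++i) {
      if (std::fabs(vertex[i] - v[i]) > 1e-6) {
        eq = false;
        break;
      }
    }

    if (eq) return false;   // a vertex equal up to tolerance is already stored
  }

  credalSet.push_back(vertex);
  return true;
}
```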

◆ updateExpectations_()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::updateExpectations_ ( const NodeId id,
const std::vector< GUM_SCALAR > &  vertex 
)
inlineprotectedinherited

Given a node id and one of its possible vertices obtained during inference, update this node's lower and upper expectations.

Parameters
id: The id of the node to be updated.
vertex: A (potential) vertex of the node's credal set.

Definition at line 881 of file inferenceEngine_tpl.h.

882  {
883  std::string var_name = credalNet_->current_bn().variable(id).name();
884  auto delim = var_name.find_first_of("_");
885 
886  var_name = var_name.substr(0, delim);
887 
888  if (modal_.exists(var_name) /*modal_.find(var_name) != modal_.end()*/) {
889  GUM_SCALAR exp = 0;
890  auto vsize = vertex.size();
891 
892  for (Size mod = 0; mod < vsize; mod++)
893  exp += vertex[mod] * modal_[var_name][mod];
894 
895  if (exp > expectationMax_[id]) expectationMax_[id] = exp;
896 
897  if (exp < expectationMin_[id]) expectationMin_[id] = exp;
898  }
899  }
const CredalNet< GUM_SCALAR > * credalNet_
A pointer to the Credal Net used.
dynExpe modal_
Variables modalities used to compute expectations.
expe expectationMax_
Upper expectations, if some variables modalities were inserted.
expe expectationMin_
Lower expectations, if some variables modalities were inserted.

◆ updateMarginals_()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::updateMarginals_ ( )
inlineprotectedinherited

Fusion of threads marginals.

Definition at line 267 of file multipleInferenceEngine_tpl.h.

267  {
268 #pragma omp parallel
269  {
270  int threadId = getThreadNumber();
271  long nsize = long(workingSet_[threadId]->size());
272 
273 #pragma omp for
274 
275  for (long i = 0; i < nsize; i++) {
276  Size dSize = Size(l_marginalMin_[threadId][i].size());
277 
278  for (Size j = 0; j < dSize; j++) {
279  Size tsize = Size(l_marginalMin_.size());
280 
281  // go through all threads
282  for (Size tId = 0; tId < tsize; tId++) {
283  if (l_marginalMin_[tId][i][j] < this->marginalMin_[i][j])
284  this->marginalMin_[i][j] = l_marginalMin_[tId][i][j];
285 
286  if (l_marginalMax_[tId][i][j] > this->marginalMax_[i][j])
287  this->marginalMax_[i][j] = l_marginalMax_[tId][i][j];
288  } // end of : all threads
289  } // end of : all modalities
290  } // end of : all variables
291  } // end of : parallel region
292  }
_margis_ l_marginalMin_
Threads lower marginals, one per thread.
unsigned int getThreadNumber()
Get the calling thread id.
_margis_ l_marginalMax_
Threads upper marginals, one per thread.
std::vector< _bnet_ *> workingSet_
Threads IBayesNet.
margi marginalMax_
Upper marginals.
margi marginalMin_
Lower marginals.
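A serial sketch of this fusion for a single node (the real listing distributes the outer loop with OpenMP; function and variable names are ours): the global lower (resp. upper) marginal is the element-wise min (resp. max) over the per-thread marginals.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Fuse per-thread lower/upper marginals of one node into the global bounds,
// mirroring the inner loops of updateMarginals_ above.
void fuseMarginals(const std::vector< std::vector< double > >& l_min,
                   const std::vector< std::vector< double > >& l_max,
                   std::vector< double >&                      globalMin,
                   std::vector< double >&                      globalMax) {
  for (std::size_t tId = 0; tId < l_min.size(); ++tId) {
    for (std::size_t j = 0; j < globalMin.size(); ++j) {
      if (l_min[tId][j] < globalMin[j]) globalMin[j] = l_min[tId][j];
      if (l_max[tId][j] > globalMax[j]) globalMax[j] = l_max[tId][j];
    }
  }
}
```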

◆ updateOldMarginals_()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::updateOldMarginals_ ( )
protectedinherited

Update old marginals (from current marginals).

Call this once to initialize old marginals (after burn-in for example) and then use computeEpsilon_, which does the same job but computes epsilon too.

Definition at line 338 of file multipleInferenceEngine_tpl.h.

338  {
339 #pragma omp parallel
340  {
341  int threadId = getThreadNumber();
342  long nsize = long(workingSet_[threadId]->size());
343 
344 #pragma omp for
345 
346  for (long i = 0; i < nsize; i++) {
347  Size dSize = Size(l_marginalMin_[threadId][i].size());
348 
349  for (Size j = 0; j < dSize; j++) {
350  Size tsize = Size(l_marginalMin_.size());
351 
352  // go through all threads
353  for (Size tId = 0; tId < tsize; tId++) {
354  if (l_marginalMin_[tId][i][j] < this->oldMarginalMin_[i][j])
355  this->oldMarginalMin_[i][j] = l_marginalMin_[tId][i][j];
356 
357  if (l_marginalMax_[tId][i][j] > this->oldMarginalMax_[i][j])
358  this->oldMarginalMax_[i][j] = l_marginalMax_[tId][i][j];
359  } // end of : all threads
360  } // end of : all modalities
361  } // end of : all variables
362  } // end of : parallel region
363  }
_margis_ l_marginalMin_
Threads lower marginals, one per thread.
margi oldMarginalMin_
Old lower marginals used to compute epsilon.
unsigned int getThreadNumber()
Get the calling thread id.
margi oldMarginalMax_
Old upper marginals used to compute epsilon.
_margis_ l_marginalMax_
Threads upper marginals, one per thread.
std::vector< _bnet_ *> workingSet_
Threads IBayesNet.

◆ updateThread_()

template<typename GUM_SCALAR , class BNInferenceEngine >
bool gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::updateThread_ ( const NodeId id,
const std::vector< GUM_SCALAR > &  vertex,
const bool elimRedund = false 
)
inlineprotectedinherited

Update thread information (marginals, expectations, IBayesNet, vertices) for a given node id.

Parameters
id: The id of the node to be updated.
vertex: The vertex.
elimRedund: True if redundancy elimination is to be performed, false otherwise (the default).
Returns
True if the IBayesNet is kept (for now), False otherwise.

Definition at line 85 of file multipleInferenceEngine_tpl.h.

88  {
89  int tId = getThreadNumber();
90 
91  // save E(X) if we don't save vertices
92  if (!_infE_::storeVertices_ && !l_modal_[tId].empty()) {
93  std::string var_name = workingSet_[tId]->variable(id).name();
94  auto delim = var_name.find_first_of("_");
95  var_name = var_name.substr(0, delim);
96 
97  if (l_modal_[tId].exists(var_name)) {
98  GUM_SCALAR exp = 0;
99  Size vsize = Size(vertex.size());
100 
101  for (Size mod = 0; mod < vsize; mod++)
102  exp += vertex[mod] * l_modal_[tId][var_name][mod];
103 
104  if (exp > l_expectationMax_[tId][id]) l_expectationMax_[tId][id] = exp;
105 
106  if (exp < l_expectationMin_[tId][id]) l_expectationMin_[tId][id] = exp;
107  }
108  } // end of : if modal (map) not empty
109 
110  bool newOne = false;
111  bool added = false;
112  bool result = false;
113  // for burn in, we need to keep checking on local marginals and not global
114  // ones
115  // (faster inference)
116  // we also don't want to store dbn for observed variables since there will
117  // be a
118  // huge number of them (probably all of them).
119  Size vsize = Size(vertex.size());
120 
121  for (Size mod = 0; mod < vsize; mod++) {
122  if (vertex[mod] < l_marginalMin_[tId][id][mod]) {
123  l_marginalMin_[tId][id][mod] = vertex[mod];
124  newOne = true;
125 
126  if (_infE_::storeBNOpt_ && !_infE_::evidence_.exists(id)) {
127  std::vector< Size > key(3);
128  key[0] = id;
129  key[1] = mod;
130  key[2] = 0;
131 
132  if (l_optimalNet_[tId]->insert(key, true)) result = true;
133  }
134  }
135 
136  if (vertex[mod] > l_marginalMax_[tId][id][mod]) {
137  l_marginalMax_[tId][id][mod] = vertex[mod];
138  newOne = true;
139 
140  if (_infE_::storeBNOpt_ && !_infE_::evidence_.exists(id)) {
141  std::vector< Size > key(3);
142  key[0] = id;
143  key[1] = mod;
144  key[2] = 1;
145 
146  if (l_optimalNet_[tId]->insert(key, true)) result = true;
147  }
148  } else if (vertex[mod] == l_marginalMin_[tId][id][mod]
149  || vertex[mod] == l_marginalMax_[tId][id][mod]) {
150  newOne = true;
151 
152  if (_infE_::storeBNOpt_ && vertex[mod] == l_marginalMin_[tId][id][mod]
153  && !_infE_::evidence_.exists(id)) {
154  std::vector< Size > key(3);
155  key[0] = id;
156  key[1] = mod;
157  key[2] = 0;
158 
159  if (l_optimalNet_[tId]->insert(key, false)) result = true;
160  }
161 
162  if (_infE_::storeBNOpt_ && vertex[mod] == l_marginalMax_[tId][id][mod]
163  && !_infE_::evidence_.exists(id)) {
164  std::vector< Size > key(3);
165  key[0] = id;
166  key[1] = mod;
167  key[2] = 1;
168 
169  if (l_optimalNet_[tId]->insert(key, false)) result = true;
170  }
171  }
172 
173  // store point to compute credal set vertices.
174  // check for redundancy at each step or at the end ?
175  if (_infE_::storeVertices_ && !added && newOne) {
176  _updateThreadCredalSets_(id, vertex, elimRedund);
177  added = true;
178  }
179  }
180 
181  // if all variables didn't get better marginals, we will delete
182  if (_infE_::storeBNOpt_ && result) return true;
183 
184  return false;
185  }
_margis_ l_marginalMin_
Threads lower marginals, one per thread.
unsigned int getThreadNumber()
Get the calling thread id.
bool storeBNOpt_
True if optimal IBayesNets are stored for each variable and each modality, False otherwise.
bool storeVertices_
True if credal sets vertices are stored, False otherwise.
_expes_ l_expectationMax_
Threads upper expectations, one per thread.
_margis_ l_marginalMax_
Threads upper marginals, one per thread.
_expes_ l_expectationMin_
Threads lower expectations, one per thread.
margi evidence_
Holds observed variables states.
void _updateThreadCredalSets_(const NodeId &id, const std::vector< GUM_SCALAR > &vertex, const bool &elimRedund)
Ask for redundancy elimination of a node credal set of a calling thread.
std::vector< VarMod2BNsMap< GUM_SCALAR > *> l_optimalNet_
Threads optimal IBayesNet.
std::vector< _bnet_ *> workingSet_
Threads IBayesNet.

◆ verbosity()

INLINE bool gum::ApproximationScheme::verbosity ( ) const
virtualinherited

Returns true if verbosity is enabled.

Returns
Returns true if verbosity is enabled.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 139 of file approximationScheme_inl.h.


139 { return verbosity_; }
bool verbosity_
If true, verbosity is enabled.

◆ vertices()

template<typename GUM_SCALAR >
const std::vector< std::vector< GUM_SCALAR > > & gum::credal::InferenceEngine< GUM_SCALAR >::vertices ( const NodeId  id) const
inherited

Get the vertices of a given node id.

Parameters
id: The node id whose vertices we want.
Returns
A constant reference to this node's vertices.

Definition at line 513 of file inferenceEngine_tpl.h.

513  {
514  return marginalSets_[id];
515  }
credalSet marginalSets_
Credal sets vertices, if enabled.

◆ verticesFusion_()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::verticesFusion_ ( )
protectedinherited
Deprecated:
Fusion of threads vertices.

Definition at line 366 of file multipleInferenceEngine_tpl.h.

366  {
367  // don't create threads if there are no vertices saved
368  if (!_infE_::storeVertices_) return;
369 
370 #pragma omp parallel
371  {
372  int threadId = getThreadNumber();
373  Size nsize = Size(workingSet_[threadId]->size());
374 
375 #pragma omp for
376 
377  for (long i = 0; i < long(nsize); i++) {
378  Size tsize = Size(l_marginalMin_.size());
379 
380  // go through all threads
381  for (long tId = 0; tId < long(tsize); tId++) {
382  auto& nodeThreadCredalSet = l_marginalSets_[tId][i];
383 
384  // for each vertex, if we are at any opt marginal, add it to the set
385  for (const auto& vtx: nodeThreadCredalSet) {
386  // we run redundancy elimination at each step
387  // because there could be 100000 threads and the set will be so
388  // huge
389  // ...
390  // BUT not if vertices are of dimension 2 ! opt check and equality
391  // should be enough
392  _infE_::updateCredalSets_(i, vtx, (vtx.size() > 2) ? true : false);
393  } // end of : nodeThreadCredalSet
394  } // end of : all threads
395  } // end of : all variables
396  } // end of : parallel region
397  }

Member Data Documentation

◆ burn_in_

Size gum::ApproximationScheme::burn_in_
protectedinherited

Number of iterations before checking stopping criteria.

Definition at line 413 of file approximationScheme.h.

◆ credalNet_

template<typename GUM_SCALAR >
const CredalNet< GUM_SCALAR >* gum::credal::InferenceEngine< GUM_SCALAR >::credalNet_
protectedinherited

A pointer to the Credal Net used.

Definition at line 68 of file inferenceEngine.h.

◆ current_epsilon_

double gum::ApproximationScheme::current_epsilon_
protectedinherited

Current epsilon.

Definition at line 368 of file approximationScheme.h.

◆ current_rate_

double gum::ApproximationScheme::current_rate_
protectedinherited

Current rate.

Definition at line 374 of file approximationScheme.h.

◆ current_state_

ApproximationSchemeSTATE gum::ApproximationScheme::current_state_
protectedinherited

The current state.

Definition at line 383 of file approximationScheme.h.

◆ current_step_

Size gum::ApproximationScheme::current_step_
protectedinherited

The current step.

Definition at line 377 of file approximationScheme.h.

◆ dbnOpt_

template<typename GUM_SCALAR >
VarMod2BNsMap< GUM_SCALAR > gum::credal::InferenceEngine< GUM_SCALAR >::dbnOpt_
protectedinherited

Object used to efficiently store the optimal Bayes nets found during inference, for some algorithms.

Definition at line 141 of file inferenceEngine.h.

◆ dynamicExpMax_

template<typename GUM_SCALAR >
dynExpe gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpMax_
protectedinherited

Upper dynamic expectations.

If the network is not dynamic, its content is the same as expectationMax_.

Definition at line 95 of file inferenceEngine.h.

◆ dynamicExpMin_

template<typename GUM_SCALAR >
dynExpe gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpMin_
protectedinherited

Lower dynamic expectations.

If the network is not dynamic, its content is the same as expectationMin_.

Definition at line 92 of file inferenceEngine.h.

◆ enabled_eps_

bool gum::ApproximationScheme::enabled_eps_
protectedinherited

If true, the threshold convergence is enabled.

Definition at line 392 of file approximationScheme.h.

◆ enabled_max_iter_

bool gum::ApproximationScheme::enabled_max_iter_
protectedinherited

If true, the maximum iterations stopping criterion is enabled.

Definition at line 410 of file approximationScheme.h.

◆ enabled_max_time_

bool gum::ApproximationScheme::enabled_max_time_
protectedinherited

If true, the timeout is enabled.

Definition at line 404 of file approximationScheme.h.

◆ enabled_min_rate_eps_

bool gum::ApproximationScheme::enabled_min_rate_eps_
protectedinherited

If true, the minimal threshold for epsilon rate is enabled.

Definition at line 398 of file approximationScheme.h.

◆ eps_

double gum::ApproximationScheme::eps_
protectedinherited

Threshold for convergence.

Definition at line 389 of file approximationScheme.h.

◆ evidence_

template<typename GUM_SCALAR >
margi gum::credal::InferenceEngine< GUM_SCALAR >::evidence_
protectedinherited

Holds observed variables states.

Definition at line 101 of file inferenceEngine.h.

◆ expectationMax_

template<typename GUM_SCALAR >
expe gum::credal::InferenceEngine< GUM_SCALAR >::expectationMax_
protectedinherited

Upper expectations, if some variables modalities were inserted.

Definition at line 88 of file inferenceEngine.h.

◆ expectationMin_

template<typename GUM_SCALAR >
expe gum::credal::InferenceEngine< GUM_SCALAR >::expectationMin_
protectedinherited

Lower expectations, if some variables modalities were inserted.

Definition at line 85 of file inferenceEngine.h.

◆ history_

std::vector< double > gum::ApproximationScheme::history_
protectedinherited

The scheme history, used only if verbosity == true.

Definition at line 386 of file approximationScheme.h.

◆ l_clusters_

template<typename GUM_SCALAR , class BNInferenceEngine >
_clusters_ gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::l_clusters_
protectedinherited

Threads clusters.

Definition at line 105 of file multipleInferenceEngine.h.

◆ l_evidence_

template<typename GUM_SCALAR , class BNInferenceEngine >
_margis_ gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::l_evidence_
protectedinherited

Threads evidence.

Definition at line 103 of file multipleInferenceEngine.h.

◆ l_expectationMax_

template<typename GUM_SCALAR , class BNInferenceEngine >
_expes_ gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::l_expectationMax_
protectedinherited

Threads upper expectations, one per thread.

Definition at line 97 of file multipleInferenceEngine.h.

◆ l_expectationMin_

template<typename GUM_SCALAR , class BNInferenceEngine >
_expes_ gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::l_expectationMin_
protectedinherited

Threads lower expectations, one per thread.

Definition at line 95 of file multipleInferenceEngine.h.

◆ l_inferenceEngine_

template<typename GUM_SCALAR , class BNInferenceEngine >
std::vector< BNInferenceEngine* > gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::l_inferenceEngine_
protectedinherited

Threads BNInferenceEngine.

Definition at line 113 of file multipleInferenceEngine.h.

◆ l_marginalMax_

template<typename GUM_SCALAR , class BNInferenceEngine >
_margis_ gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::l_marginalMax_
protectedinherited

Threads upper marginals, one per thread.

Definition at line 93 of file multipleInferenceEngine.h.

◆ l_marginalMin_

template<typename GUM_SCALAR , class BNInferenceEngine >
_margis_ gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::l_marginalMin_
protectedinherited

Threads lower marginals, one per thread.

Definition at line 91 of file multipleInferenceEngine.h.

◆ l_marginalSets_

template<typename GUM_SCALAR , class BNInferenceEngine >
_credalSets_ gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::l_marginalSets_
protectedinherited

Threads vertices.

Definition at line 101 of file multipleInferenceEngine.h.

◆ l_modal_

template<typename GUM_SCALAR , class BNInferenceEngine >
_modals_ gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::l_modal_
protectedinherited

Threads modalities.

Definition at line 99 of file multipleInferenceEngine.h.

◆ l_optimalNet_

template<typename GUM_SCALAR , class BNInferenceEngine >
std::vector< VarMod2BNsMap< GUM_SCALAR >* > gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::l_optimalNet_
protectedinherited

Threads optimal IBayesNet.

Definition at line 115 of file multipleInferenceEngine.h.

◆ last_epsilon_

double gum::ApproximationScheme::last_epsilon_
protectedinherited

Last epsilon value.

Definition at line 371 of file approximationScheme.h.

◆ marginalMax_

template<typename GUM_SCALAR >
margi gum::credal::InferenceEngine< GUM_SCALAR >::marginalMax_
protectedinherited

Upper marginals.

Definition at line 78 of file inferenceEngine.h.

◆ marginalMin_

template<typename GUM_SCALAR >
margi gum::credal::InferenceEngine< GUM_SCALAR >::marginalMin_
protectedinherited

Lower marginals.

Definition at line 76 of file inferenceEngine.h.

◆ marginalSets_

template<typename GUM_SCALAR >
credalSet gum::credal::InferenceEngine< GUM_SCALAR >::marginalSets_
protectedinherited

Credal sets vertices, if enabled.

Definition at line 81 of file inferenceEngine.h.

◆ max_iter_

Size gum::ApproximationScheme::max_iter_
protectedinherited

The maximum iterations.

Definition at line 407 of file approximationScheme.h.

◆ max_time_

double gum::ApproximationScheme::max_time_
protectedinherited

The timeout.

Definition at line 401 of file approximationScheme.h.

◆ min_rate_eps_

double gum::ApproximationScheme::min_rate_eps_
protectedinherited

Threshold for the epsilon rate.

Definition at line 395 of file approximationScheme.h.

◆ modal_

template<typename GUM_SCALAR >
dynExpe gum::credal::InferenceEngine< GUM_SCALAR >::modal_
protectedinherited

Variables modalities used to compute expectations.

Definition at line 98 of file inferenceEngine.h.

◆ oldMarginalMax_

template<typename GUM_SCALAR >
margi gum::credal::InferenceEngine< GUM_SCALAR >::oldMarginalMax_
protectedinherited

Old upper marginals used to compute epsilon.

Definition at line 73 of file inferenceEngine.h.

◆ oldMarginalMin_

template<typename GUM_SCALAR >
margi gum::credal::InferenceEngine< GUM_SCALAR >::oldMarginalMin_
protectedinherited

Old lower marginals used to compute epsilon.

Definition at line 71 of file inferenceEngine.h.

◆ onProgress

Signaler3< Size, double, double > gum::IApproximationSchemeConfiguration::onProgress
inherited

Progression, error and time.

Definition at line 58 of file IApproximationSchemeConfiguration.h.

◆ onStop

Signaler1< std::string > gum::IApproximationSchemeConfiguration::onStop
inherited

Criteria messageApproximationScheme.

Definition at line 61 of file IApproximationSchemeConfiguration.h.

◆ period_size_

Size gum::ApproximationScheme::period_size_
protectedinherited

Checking criteria frequency.

Definition at line 416 of file approximationScheme.h.

◆ query_

template<typename GUM_SCALAR >
query gum::credal::InferenceEngine< GUM_SCALAR >::query_
protectedinherited

Holds the query nodes states.

Definition at line 103 of file inferenceEngine.h.

◆ repetitiveInd_

template<typename GUM_SCALAR , class BNInferenceEngine = LazyPropagation< GUM_SCALAR >>
bool gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::repetitiveInd_
protected

Definition at line 126 of file CNMonteCarloSampling.h.

◆ storeBNOpt_

template<typename GUM_SCALAR >
bool gum::credal::InferenceEngine< GUM_SCALAR >::storeBNOpt_
protectedinherited

True if the optimal Bayes nets are stored, for each variable and each modality, False otherwise. Not all algorithms offer this option. False by default.

Definition at line 137 of file inferenceEngine.h.

◆ storeVertices_

template<typename GUM_SCALAR >
bool gum::credal::InferenceEngine< GUM_SCALAR >::storeVertices_
protectedinherited

True if credal sets vertices are stored, False otherwise.

False by default.

Definition at line 123 of file inferenceEngine.h.

◆ t0_

template<typename GUM_SCALAR >
cluster gum::credal::InferenceEngine< GUM_SCALAR >::t0_
protectedinherited

Clusters of nodes used with dynamic networks.

Any node key in t0_ is present at \( t=0 \), and every node belonging to that key's node set shares the same CPT as the key. Used for sampling with repetitive independence.

Definition at line 111 of file inferenceEngine.h.

◆ t1_

template<typename GUM_SCALAR >
cluster gum::credal::InferenceEngine< GUM_SCALAR >::t1_
protectedinherited

Clusters of nodes used with dynamic networks.

Any node key in t1_ is present at \( t=1 \), and every node belonging to that key's node set shares the same CPT as the key. Used for sampling with repetitive independence.

Definition at line 118 of file inferenceEngine.h.

◆ timer_

Timer gum::ApproximationScheme::timer_
protectedinherited

The timer.

Definition at line 380 of file approximationScheme.h.

◆ timeSteps_

template<typename GUM_SCALAR >
int gum::credal::InferenceEngine< GUM_SCALAR >::timeSteps_
protectedinherited

The number of time steps of this network (only useful for dynamic networks).

Deprecated:

Definition at line 148 of file inferenceEngine.h.

◆ verbosity_

bool gum::ApproximationScheme::verbosity_
protectedinherited

If true, verbosity is enabled.

Definition at line 419 of file approximationScheme.h.

◆ workingSet_

template<typename GUM_SCALAR , class BNInferenceEngine >
std::vector< _bnet_* > gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::workingSet_
protectedinherited

Threads IBayesNet.

Definition at line 108 of file multipleInferenceEngine.h.

◆ workingSetE_

template<typename GUM_SCALAR , class BNInferenceEngine >
std::vector< List< const Potential< GUM_SCALAR >* >* > gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::workingSetE_
protectedinherited

Threads evidence.

Definition at line 110 of file multipleInferenceEngine.h.


The documentation for this class was generated from the following files: