aGrUM  0.20.2
a C++ library for (probabilistic) graphical models
gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine > Class Template Reference [abstract]

Class template representing a CredalNet inference engine using one or more IBayesNet inference engines such as LazyPropagation. More...

#include <multipleInferenceEngine.h>


Public Attributes

Signaler3< Size, double, double > onProgress
 Progression, error and time. More...
 
Signaler1< std::string > onStop
 Stopping criterion message; see messageApproximationScheme. More...
 

Public Member Functions

Constructors / Destructors
 MultipleInferenceEngine (const CredalNet< GUM_SCALAR > &credalNet)
 Constructor. More...
 
virtual ~MultipleInferenceEngine ()
 Destructor. More...
 
Post-inference methods
virtual void eraseAllEvidence ()
 Erase all inference related data to perform another one. More...
 
Pure virtual methods
virtual void makeInference ()=0
 To be redefined by each credal net algorithm. More...
 
Getters and setters
VarMod2BNsMap< GUM_SCALAR > * getVarMod2BNsMap ()
 Get optimum IBayesNet. More...
 
const CredalNet< GUM_SCALAR > & credalNet () const
 Get this credal network. More...
 
const NodeProperty< std::vector< NodeId > > & getT0Cluster () const
 Get the t0_ cluster. More...
 
const NodeProperty< std::vector< NodeId > > & getT1Cluster () const
 Get the t1_ cluster. More...
 
void setRepetitiveInd (const bool repetitive)
 
void storeVertices (const bool value)
 
bool storeVertices () const
 Get whether credal set vertices are stored during inference. More...
 
void storeBNOpt (const bool value)
 
bool storeBNOpt () const
 
bool repetitiveInd () const
 Get the current independence status. More...
 
Pre-inference initialization methods
void insertModalsFile (const std::string &path)
 Insert variables modalities from file to compute expectations. More...
 
void insertModals (const std::map< std::string, std::vector< GUM_SCALAR > > &modals)
 Insert variables modalities from map to compute expectations. More...
 
virtual void insertEvidenceFile (const std::string &path)
 Insert evidence from file. More...
 
void insertEvidence (const std::map< std::string, std::vector< GUM_SCALAR > > &eviMap)
 Insert evidence from map. More...
 
void insertEvidence (const NodeProperty< std::vector< GUM_SCALAR > > &evidence)
 Insert evidence from Property. More...
 
void insertQueryFile (const std::string &path)
 Insert query variables states from file. More...
 
void insertQuery (const NodeProperty< std::vector< bool > > &query)
 Insert query variables and states from Property. More...
 
Post-inference methods
Potential< GUM_SCALAR > marginalMin (const NodeId id) const
 Get the lower marginals of a given node id. More...
 
Potential< GUM_SCALAR > marginalMin (const std::string &varName) const
 Get the lower marginals of a given variable name. More...
 
Potential< GUM_SCALAR > marginalMax (const NodeId id) const
 Get the upper marginals of a given node id. More...
 
Potential< GUM_SCALAR > marginalMax (const std::string &varName) const
 Get the upper marginals of a given variable name. More...
 
const GUM_SCALAR & expectationMin (const NodeId id) const
 Get the lower expectation of a given node id. More...
 
const GUM_SCALAR & expectationMin (const std::string &varName) const
 Get the lower expectation of a given variable name. More...
 
const GUM_SCALAR & expectationMax (const NodeId id) const
 Get the upper expectation of a given node id. More...
 
const GUM_SCALAR & expectationMax (const std::string &varName) const
 Get the upper expectation of a given variable name. More...
 
const std::vector< GUM_SCALAR > & dynamicExpMin (const std::string &varName) const
 Get the lower dynamic expectation of a given variable prefix (without the time step included, i.e. call with "temp" to get "temp_0", ..., "temp_T"). More...
 
const std::vector< GUM_SCALAR > & dynamicExpMax (const std::string &varName) const
 Get the upper dynamic expectation of a given variable prefix (without the time step included, i.e. call with "temp" to get "temp_0", ..., "temp_T"). More...
 
const std::vector< std::vector< GUM_SCALAR > > & vertices (const NodeId id) const
 Get the vertices of a given node id. More...
 
void saveMarginals (const std::string &path) const
 Saves marginals to file. More...
 
void saveExpectations (const std::string &path) const
 Saves expectations to file. More...
 
void saveVertices (const std::string &path) const
 Saves vertices to file. More...
 
void dynamicExpectations ()
 Compute dynamic expectations. More...
 
std::string toString () const
 Print all nodes marginals to standard output. More...
 
const std::string getApproximationSchemeMsg ()
 Get approximation scheme state. More...
 
Getters and setters
void setEpsilon (double eps)
 Given that we approximate f(t), stopping criterion on |f(t+1)-f(t)|. More...
 
double epsilon () const
 Returns the value of epsilon. More...
 
void disableEpsilon ()
 Disable stopping criterion on epsilon. More...
 
void enableEpsilon ()
 Enable stopping criterion on epsilon. More...
 
bool isEnabledEpsilon () const
 Returns true if stopping criterion on epsilon is enabled, false otherwise. More...
 
void setMinEpsilonRate (double rate)
 Given that we approximate f(t), stopping criterion on d/dt(|f(t+1)-f(t)|). More...
 
double minEpsilonRate () const
 Returns the value of the minimal epsilon rate. More...
 
void disableMinEpsilonRate ()
 Disable stopping criterion on epsilon rate. More...
 
void enableMinEpsilonRate ()
 Enable stopping criterion on epsilon rate. More...
 
bool isEnabledMinEpsilonRate () const
 Returns true if stopping criterion on epsilon rate is enabled, false otherwise. More...
 
void setMaxIter (Size max)
 Stopping criterion on number of iterations. More...
 
Size maxIter () const
 Returns the criterion on number of iterations. More...
 
void disableMaxIter ()
 Disable stopping criterion on max iterations. More...
 
void enableMaxIter ()
 Enable stopping criterion on max iterations. More...
 
bool isEnabledMaxIter () const
 Returns true if stopping criterion on max iterations is enabled, false otherwise. More...
 
void setMaxTime (double timeout)
 Stopping criterion on timeout. More...
 
double maxTime () const
 Returns the timeout (in seconds). More...
 
double currentTime () const
 Returns the current running time in seconds. More...
 
void disableMaxTime ()
 Disable stopping criterion on timeout. More...
 
void enableMaxTime ()
 Enable stopping criterion on timeout. More...
 
bool isEnabledMaxTime () const
 Returns true if stopping criterion on timeout is enabled, false otherwise. More...
 
void setPeriodSize (Size p)
 Number of samples between two tests of the stopping criteria. More...
 
Size periodSize () const
 Returns the period size. More...
 
void setVerbosity (bool v)
 Set the verbosity on (true) or off (false). More...
 
bool verbosity () const
 Returns true if verbosity is enabled. More...
 
ApproximationSchemeSTATE stateApproximationScheme () const
 Returns the approximation scheme state. More...
 
Size nbrIterations () const
 Returns the number of iterations. More...
 
const std::vector< double > & history () const
 Returns the scheme history. More...
 
void initApproximationScheme ()
 Initialise the scheme. More...
 
bool startOfPeriod ()
 Returns true if we are at the beginning of a period (compute error is mandatory). More...
 
void updateApproximationScheme (unsigned int incr=1)
 Update the scheme w.r.t the new error and increment steps. More...
 
Size remainingBurnIn ()
 Returns the remaining burn in. More...
 
void stopApproximationScheme ()
 Stop the approximation scheme. More...
 
bool continueApproximationScheme (double error)
 Update the scheme w.r.t the new error. More...
 
Getters and setters
std::string messageApproximationScheme () const
 Returns the approximation scheme message. More...
 

Public Types

enum  ApproximationSchemeSTATE : char {
  ApproximationSchemeSTATE::Undefined, ApproximationSchemeSTATE::Continue, ApproximationSchemeSTATE::Epsilon, ApproximationSchemeSTATE::Rate,
  ApproximationSchemeSTATE::Limit, ApproximationSchemeSTATE::TimeLimit, ApproximationSchemeSTATE::Stopped
}
 The different states of an approximation scheme. More...
 

Protected Attributes

margis__ l_marginalMin_
 Threads lower marginals, one per thread. More...
 
margis__ l_marginalMax_
 Threads upper marginals, one per thread. More...
 
expes__ l_expectationMin_
 Threads lower expectations, one per thread. More...
 
expes__ l_expectationMax_
 Threads upper expectations, one per thread. More...
 
modals__ l_modal_
 Threads modalities. More...
 
credalSets__ l_marginalSets_
 Threads vertices. More...
 
margis__ l_evidence_
 Threads evidence. More...
 
clusters__ l_clusters_
 Threads clusters. More...
 
std::vector< bnet__ *> workingSet_
 Threads IBayesNet. More...
 
std::vector< List< const Potential< GUM_SCALAR > *> *> workingSetE_
 Threads evidence. More...
 
std::vector< BNInferenceEngine *> l_inferenceEngine_
 Threads BNInferenceEngine. More...
 
std::vector< VarMod2BNsMap< GUM_SCALAR > *> l_optimalNet_
 Threads optimal IBayesNet. More...
 
const CredalNet< GUM_SCALAR > * credalNet_
 A pointer to the Credal Net used. More...
 
margi oldMarginalMin_
 Old lower marginals used to compute epsilon. More...
 
margi oldMarginalMax_
 Old upper marginals used to compute epsilon. More...
 
margi marginalMin_
 Lower marginals. More...
 
margi marginalMax_
 Upper marginals. More...
 
credalSet marginalSets_
 Credal sets vertices, if enabled. More...
 
expe expectationMin_
 Lower expectations, if some variables modalities were inserted. More...
 
expe expectationMax_
 Upper expectations, if some variables modalities were inserted. More...
 
dynExpe dynamicExpMin_
 Lower dynamic expectations. More...
 
dynExpe dynamicExpMax_
 Upper dynamic expectations. More...
 
dynExpe modal_
 Variables modalities used to compute expectations. More...
 
margi evidence_
 Holds observed variables states. More...
 
query query_
 Holds the query nodes states. More...
 
cluster t0_
 Clusters of nodes used with dynamic networks. More...
 
cluster t1_
 Clusters of nodes used with dynamic networks. More...
 
bool storeVertices_
 True if credal sets vertices are stored, False otherwise. More...
 
bool repetitiveInd_
 True if using repetitive independence ( dynamic network only ), False otherwise. More...
 
bool storeBNOpt_
 True if optimal IBayesNets are stored during inference, False otherwise. More...
 
VarMod2BNsMap< GUM_SCALAR > dbnOpt_
 Object used to efficiently store optimal Bayes nets during inference, for some algorithms. More...
 
int timeSteps_
 The number of time steps of this network (only useful for dynamic networks). More...
 
double current_epsilon_
 Current epsilon. More...
 
double last_epsilon_
 Last epsilon value. More...
 
double current_rate_
 Current rate. More...
 
Size current_step_
 The current step. More...
 
Timer timer_
 The timer. More...
 
ApproximationSchemeSTATE current_state_
 The current state. More...
 
std::vector< double > history_
 The scheme history, used only if verbosity == true. More...
 
double eps_
 Threshold for convergence. More...
 
bool enabled_eps_
 If true, the threshold convergence is enabled. More...
 
double min_rate_eps_
 Threshold for the epsilon rate. More...
 
bool enabled_min_rate_eps_
 If true, the minimal threshold for epsilon rate is enabled. More...
 
double max_time_
 The timeout. More...
 
bool enabled_max_time_
 If true, the timeout is enabled. More...
 
Size max_iter_
 The maximum iterations. More...
 
bool enabled_max_iter_
 If true, the maximum iterations stopping criterion is enabled. More...
 
Size burn_in_
 Number of iterations before checking stopping criteria. More...
 
Size period_size_
 Checking criteria frequency. More...
 
bool verbosity_
 If true, verbosity is enabled. More...
 

Protected Member Functions

Protected initialization methods


void initThreadsData_ (const Size &num_threads, const bool storeVertices__, const bool storeBNOpt__)
 Initialize threads data. More...
 
Protected algorithms methods
bool updateThread_ (const NodeId &id, const std::vector< GUM_SCALAR > &vertex, const bool &elimRedund=false)
 Update thread information (marginals, expectations, IBayesNet, vertices) for a given node id. More...
 
void updateMarginals_ ()
 Fusion of threads marginals. More...
 
const GUM_SCALAR computeEpsilon_ ()
 Compute epsilon and update old marginals. More...
 
void updateOldMarginals_ ()
 Update old marginals (from current marginals). More...
 
Protected post-inference methods
void optFusion_ ()
 Fusion of threads optimal IBayesNet. More...
 
void expFusion_ ()
 Fusion of threads expectations. More...
 
void verticesFusion_ ()
 
Protected initialization methods
void repetitiveInit_ ()
 Initialize t0_ and t1_ clusters. More...
 
void initExpectations_ ()
 Initialize lower and upper expectations before inference, with the lower expectation being initialized on the highest modality and the upper expectation being initialized on the lowest modality. More...
 
void initMarginals_ ()
 Initialize lower and upper old marginals and marginals before inference, with the lower marginal being 1 and the upper 0. More...
 
void initMarginalSets_ ()
 Initialize credal set vertices with empty sets. More...
 
Protected algorithms methods
void updateExpectations_ (const NodeId &id, const std::vector< GUM_SCALAR > &vertex)
 Given a node id and one of its possible vertices obtained during inference, update this node's lower and upper expectations. More...
 
void updateCredalSets_ (const NodeId &id, const std::vector< GUM_SCALAR > &vertex, const bool &elimRedund=false)
 Given a node id and one of its possible vertices, update its credal set. More...
 
Protected post-inference methods
void dynamicExpectations_ ()
 Rearrange lower and upper expectations to suit dynamic networks. More...
 

Detailed Description

template<typename GUM_SCALAR, class BNInferenceEngine>
class gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >

Class template representing a CredalNet inference engine using one or more IBayesNet inference engines such as LazyPropagation.

Extends InferenceEngine< GUM_SCALAR >. Used for outer multi-threading such as CNMonteCarloSampling.

Template Parameters
GUM_SCALAR A floating-point type ( float, double, long double ... ).
BNInferenceEngine An IBayesNet inference engine such as LazyPropagation.
Author
Matthieu HOURBRACQ and Pierre-Henri WUILLEMIN

Definition at line 54 of file multipleInferenceEngine.h.
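The fusion idea behind the per-thread protected members documented below can be illustrated with a small, self-contained sketch (plain C++ with hypothetical names, not aGrUM's actual code): each thread runs its own BNInferenceEngine and keeps local lower/upper marginals, and the engine fuses them by taking the element-wise minimum of the lower bounds and maximum of the upper bounds.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical illustration of the per-thread fusion step: lowers[t] and
// uppers[t] are thread t's lower/upper marginal estimates for one variable.
// The fused credal bounds are the element-wise min of the lower bounds and
// the element-wise max of the upper bounds.
std::pair< std::vector< double >, std::vector< double > >
fuseMarginals(const std::vector< std::vector< double > >& lowers,
              const std::vector< std::vector< double > >& uppers) {
  std::vector< double > lo = lowers[0];
  std::vector< double > hi = uppers[0];
  for (std::size_t t = 1; t < lowers.size(); ++t)
    for (std::size_t i = 0; i < lo.size(); ++i) {
      lo[i] = std::min(lo[i], lowers[t][i]);   // lower envelope
      hi[i] = std::max(hi[i], uppers[t][i]);   // upper envelope
    }
  return {lo, hi};
}
```

This mirrors what updateMarginals_ does across l_marginalMin_ / l_marginalMax_, minus the threading machinery.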

Member Typedef Documentation

◆ bnet__

template<typename GUM_SCALAR , class BNInferenceEngine >
typedef IBayesNet< GUM_SCALAR > gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::bnet__
private

Definition at line 64 of file multipleInferenceEngine.h.

◆ cluster__

template<typename GUM_SCALAR , class BNInferenceEngine >
typedef NodeProperty< std::vector< NodeId > > gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::cluster__
private

Definition at line 59 of file multipleInferenceEngine.h.

◆ clusters__

template<typename GUM_SCALAR , class BNInferenceEngine >
typedef std::vector< std::vector< cluster__ > > gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::clusters__
private

Definition at line 68 of file multipleInferenceEngine.h.

◆ credalSet__

template<typename GUM_SCALAR , class BNInferenceEngine >
typedef NodeProperty< std::vector< std::vector< GUM_SCALAR > > > gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::credalSet__
private

Definition at line 60 of file multipleInferenceEngine.h.

◆ credalSets__

template<typename GUM_SCALAR , class BNInferenceEngine >
typedef std::vector< credalSet__ > gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::credalSets__
private

Definition at line 67 of file multipleInferenceEngine.h.

◆ expe__

template<typename GUM_SCALAR , class BNInferenceEngine >
typedef NodeProperty< GUM_SCALAR > gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::expe__
private

Definition at line 62 of file multipleInferenceEngine.h.

◆ expes__

template<typename GUM_SCALAR , class BNInferenceEngine >
typedef std::vector< expe__ > gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::expes__
private

Definition at line 66 of file multipleInferenceEngine.h.

◆ infE__

template<typename GUM_SCALAR , class BNInferenceEngine >
typedef InferenceEngine< GUM_SCALAR > gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::infE__
private

To easily access InferenceEngine< GUM_SCALAR > methods.

Definition at line 57 of file multipleInferenceEngine.h.

◆ margi__

template<typename GUM_SCALAR , class BNInferenceEngine >
typedef NodeProperty< std::vector< GUM_SCALAR > > gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::margi__
private

Definition at line 61 of file multipleInferenceEngine.h.

◆ margis__

template<typename GUM_SCALAR , class BNInferenceEngine >
typedef std::vector< margi__ > gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::margis__
private

Definition at line 65 of file multipleInferenceEngine.h.

◆ modals__

template<typename GUM_SCALAR , class BNInferenceEngine >
typedef std::vector< HashTable< std::string, std::vector< GUM_SCALAR > > > gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::modals__
private

Definition at line 72 of file multipleInferenceEngine.h.

Member Enumeration Documentation

◆ ApproximationSchemeSTATE

The different states of an approximation scheme.

Enumerator
Undefined 
Continue 
Epsilon 
Rate 
Limit 
TimeLimit 
Stopped 

Definition at line 64 of file IApproximationSchemeConfiguration.h.

: char
{
  Undefined,
  Continue,
  Epsilon,
  Rate,
  Limit,
  TimeLimit,
  Stopped
};

Constructor & Destructor Documentation

◆ MultipleInferenceEngine()

template<typename GUM_SCALAR , class BNInferenceEngine >
gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::MultipleInferenceEngine ( const CredalNet< GUM_SCALAR > &  credalNet)
explicit

Constructor.

Parameters
credalNet The CredalNet to be used.

Definition at line 30 of file multipleInferenceEngine_tpl.h.

:
    InferenceEngine< GUM_SCALAR >(credalNet) {
  GUM_CONSTRUCTOR(MultipleInferenceEngine);
}

◆ ~MultipleInferenceEngine()

template<typename GUM_SCALAR , class BNInferenceEngine >
gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::~MultipleInferenceEngine ( )
virtual

Destructor.

Definition at line 37 of file multipleInferenceEngine_tpl.h.

{
  GUM_DESTRUCTOR(MultipleInferenceEngine);
}

Member Function Documentation

◆ computeEpsilon_()

template<typename GUM_SCALAR , class BNInferenceEngine >
const GUM_SCALAR gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::computeEpsilon_ ( )
inline protected

Compute epsilon and update old marginals.

Returns
Epsilon.

Definition at line 303 of file multipleInferenceEngine_tpl.h.

{
  GUM_SCALAR eps = 0;
#pragma omp parallel
  {
    GUM_SCALAR tEps = 0;
    GUM_SCALAR delta;

    int  tId   = getThreadNumber();
    long nsize = long(workingSet_[tId]->size());

#pragma omp for
    for (long i = 0; i < nsize; i++) {
      Size dSize = Size(l_marginalMin_[tId][i].size());

      for (Size j = 0; j < dSize; j++) {
        // on min
        delta = this->marginalMin_[i][j] - this->oldMarginalMin_[i][j];
        delta = (delta < 0) ? (-delta) : delta;
        tEps  = (tEps < delta) ? delta : tEps;

        // on max
        delta = this->marginalMax_[i][j] - this->oldMarginalMax_[i][j];
        delta = (delta < 0) ? (-delta) : delta;
        tEps  = (tEps < delta) ? delta : tEps;

        this->oldMarginalMin_[i][j] = this->marginalMin_[i][j];
        this->oldMarginalMax_[i][j] = this->marginalMax_[i][j];
      }
    }   // end of : all variables

#pragma omp critical(epsilon_max)
    {
#pragma omp flush(eps)
      eps = (eps < tEps) ? tEps : eps;
    }

  }   // end of : parallel region
  return eps;
}

◆ continueApproximationScheme()

INLINE bool gum::ApproximationScheme::continueApproximationScheme ( double  error)
inherited

Update the scheme w.r.t. the new error.

Tests the stopping criteria that are enabled.

Parameters
error The new error value.
Returns
false if the state becomes != ApproximationSchemeSTATE::Continue.
Exceptions
OperationNotAllowed Raised if state != ApproximationSchemeSTATE::Continue.

Definition at line 226 of file approximationScheme_inl.h.


{
  // For coherence, we fix the time used in the method
  double timer_step = timer_.step();

  if (enabled_max_time_) {
    if (timer_step > max_time_) {
      stopScheme_(ApproximationSchemeSTATE::TimeLimit);
      return false;
    }
  }

  if (!startOfPeriod()) { return true; }

  if (stateApproximationScheme() != ApproximationSchemeSTATE::Continue) {
    GUM_ERROR(OperationNotAllowed,
              "state of the approximation scheme is not correct : "
                 + messageApproximationScheme());
  }

  if (verbosity()) { history_.push_back(error); }

  if (enabled_max_iter_) {
    if (current_step_ > max_iter_) {
      stopScheme_(ApproximationSchemeSTATE::Limit);
      return false;
    }
  }

  last_epsilon_    = current_epsilon_;
  current_epsilon_ = error;   // eps rate isEnabled needs it so affectation was
                              // moved from eps isEnabled below

  if (enabled_eps_) {
    if (current_epsilon_ <= eps_) {
      stopScheme_(ApproximationSchemeSTATE::Epsilon);
      return false;
    }
  }

  if (last_epsilon_ >= 0.) {
    if (current_epsilon_ > .0) {
      // ! current_epsilon_ can be 0. AND epsilon
      // isEnabled can be disabled !
      current_rate_
         = std::fabs((current_epsilon_ - last_epsilon_) / current_epsilon_);
    }
    // limit with current eps ---> 0 is | 1 - ( last_eps / 0 ) | --->
    // infinity the else means a return false if we isEnabled the rate below,
    // as we would have returned false if epsilon isEnabled was enabled
    else {
      current_rate_ = min_rate_eps_;
    }

    if (enabled_min_rate_eps_) {
      if (current_rate_ <= min_rate_eps_) {
        stopScheme_(ApproximationSchemeSTATE::Rate);
        return false;
      }
    }
  }

  if (stateApproximationScheme() == ApproximationSchemeSTATE::Continue) {
    if (onProgress.hasListener()) {
      GUM_EMIT3(onProgress, current_step_, current_epsilon_, timer_step);
    }
    return true;
  } else {
    return false;
  }
}
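A typical caller drives the scheme in a loop: run one period of work, compute the error, and stop when continueApproximationScheme returns false. The control flow can be sketched without aGrUM (hypothetical SchemeSketch type, reduced to the two simplest criteria; the real class also tracks the epsilon rate and timeout, and emits onProgress):

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical reduction of the stopping logic to an epsilon threshold and
// an iteration limit.
struct SchemeSketch {
  double      eps     = 1e-3;   // epsilon threshold
  std::size_t maxIter = 1000;   // iteration limit
  std::size_t step    = 0;      // current step

  bool continueScheme(double error) {
    ++step;
    if (error <= eps) return false;      // epsilon criterion reached
    if (step >= maxIter) return false;   // iteration limit reached
    return true;                         // keep iterating
  }
};
```

Used as `while (s.continueScheme(err)) { /* one more inference period */ }`.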

◆ credalNet()

template<typename GUM_SCALAR >
const CredalNet< GUM_SCALAR > & gum::credal::InferenceEngine< GUM_SCALAR >::credalNet ( ) const
inherited

Get this credal network.

Returns
A constant reference to this CredalNet.

Definition at line 59 of file inferenceEngine_tpl.h.

{
  return *credalNet_;
}

◆ currentTime()

INLINE double gum::ApproximationScheme::currentTime ( ) const
virtual inherited

Returns the current running time in seconds.

Returns
Returns the current running time in seconds.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 127 of file approximationScheme_inl.h.


{ return timer_.step(); }

◆ disableEpsilon()

INLINE void gum::ApproximationScheme::disableEpsilon ( )
virtual inherited

Disable stopping criterion on epsilon.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 53 of file approximationScheme_inl.h.


{ enabled_eps_ = false; }

◆ disableMaxIter()

INLINE void gum::ApproximationScheme::disableMaxIter ( )
virtual inherited

Disable stopping criterion on max iterations.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 104 of file approximationScheme_inl.h.


{ enabled_max_iter_ = false; }

◆ disableMaxTime()

INLINE void gum::ApproximationScheme::disableMaxTime ( )
virtual inherited

Disable stopping criterion on timeout.


Implements gum::IApproximationSchemeConfiguration.

Definition at line 130 of file approximationScheme_inl.h.


{ enabled_max_time_ = false; }

◆ disableMinEpsilonRate()

INLINE void gum::ApproximationScheme::disableMinEpsilonRate ( )
virtual inherited

Disable stopping criterion on epsilon rate.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 78 of file approximationScheme_inl.h.


{
  enabled_min_rate_eps_ = false;
}

◆ dynamicExpectations()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpectations ( )
inherited

Compute dynamic expectations.

See also
dynamicExpectations_

Only call this if an algorithm does not call it by itself.

Definition at line 718 of file inferenceEngine_tpl.h.

{
  dynamicExpectations_();
}

◆ dynamicExpectations_()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpectations_ ( )
protected inherited

Rearrange lower and upper expectations to suit dynamic networks.

Definition at line 723 of file inferenceEngine_tpl.h.

{
  // no modals, no expectations computed during inference
  if (expectationMin_.empty() || modal_.empty()) return;

  // already called by the algorithm or the user
  if (dynamicExpMax_.size() > 0 && dynamicExpMin_.size() > 0) return;

  using innerMap = typename gum::HashTable< int, GUM_SCALAR >;
  using outerMap = typename gum::HashTable< std::string, innerMap >;

  // if the network is not dynamic, this directly saves expectationMin_ and
  // expectationMax_ (same result, but faster)
  outerMap expectationsMin, expectationsMax;

  for (const auto& elt: expectationMin_) {
    std::string var_name, time_step;

    var_name   = credalNet_->current_bn().variable(elt.first).name();
    auto delim = var_name.find_first_of("_");
    time_step  = var_name.substr(delim + 1, var_name.size());
    var_name   = var_name.substr(0, delim);

    // to be sure (don't store expectations of variables that are not
    // monitored), although it should be taken care of before this point
    if (!modal_.exists(var_name)) continue;

    expectationsMin.getWithDefault(var_name, innerMap())
       .getWithDefault(atoi(time_step.c_str()), 0)
       = elt.second;   // we iterate with min iterators
    expectationsMax.getWithDefault(var_name, innerMap())
       .getWithDefault(atoi(time_step.c_str()), 0)
       = expectationMax_[elt.first];
  }

  for (const auto& elt: expectationsMin) {
    typename std::vector< GUM_SCALAR > dynExp(elt.second.size());

    for (const auto& elt2: elt.second)
      dynExp[elt2.first] = elt2.second;

    dynamicExpMin_.insert(elt.first, dynExp);
  }

  for (const auto& elt: expectationsMax) {
    typename std::vector< GUM_SCALAR > dynExp(elt.second.size());

    for (const auto& elt2: elt.second) {
      dynExp[elt2.first] = elt2.second;
    }

    dynamicExpMax_.insert(elt.first, dynExp);
  }
}
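The name-splitting step in the loop above (turning a dynamic-network variable name like "temp_3" into the prefix "temp" and time step 3) is the heart of the rearrangement. A standalone sketch of just that step (hypothetical helper name, plain C++):

```cpp
#include <cassert>
#include <cstdlib>
#include <string>
#include <utility>

// Hypothetical helper mirroring the splitting done in dynamicExpectations_():
// a dynamic-network variable name "prefix_t" yields (prefix, time step t).
// Assumes the name contains an underscore followed by the time step.
std::pair< std::string, int > splitDbnName(const std::string& full) {
  auto        delim  = full.find_first_of('_');
  std::string prefix = full.substr(0, delim);                     // "temp"
  int timeStep = std::atoi(full.substr(delim + 1).c_str());       // 3
  return {prefix, timeStep};
}
```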

◆ dynamicExpMax()

template<typename GUM_SCALAR >
const std::vector< GUM_SCALAR > & gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpMax ( const std::string &  varName) const
inherited

Get the upper dynamic expectation of a given variable prefix (without the time step included, i.e. call with "temp" to get "temp_0", ..., "temp_T").

Parameters
varName The variable name prefix whose upper expectation we want.
Returns
A constant reference to the variable upper expectation over all time steps.

Definition at line 506 of file inferenceEngine_tpl.h.

{
  std::string errTxt = "const std::vector< GUM_SCALAR > & InferenceEngine< "
                       "GUM_SCALAR >::dynamicExpMax ( const std::string & "
                       "varName ) const : ";

  if (dynamicExpMax_.empty())
    GUM_ERROR(OperationNotAllowed,
              errTxt + "_dynamicExpectations() needs to be called before");

  if (!dynamicExpMax_.exists(varName))
    GUM_ERROR(NotFound, errTxt + "variable name not found : " << varName);

  return dynamicExpMax_[varName];
}

◆ dynamicExpMin()

template<typename GUM_SCALAR >
const std::vector< GUM_SCALAR > & gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpMin ( const std::string &  varName) const
inherited

Get the lower dynamic expectation of a given variable prefix (without the time step included, i.e. call with "temp" to get "temp_0", ..., "temp_T").

Parameters
varName: The variable name prefix whose lower expectation we want.
Returns
A constant reference to the variable lower expectation over all time steps.

Definition at line 488 of file inferenceEngine_tpl.h.

489  {
490  std::string errTxt = "const std::vector< GUM_SCALAR > & InferenceEngine< "
491  "GUM_SCALAR >::dynamicExpMin ( const std::string & "
492  "varName ) const : ";
493 
494  if (dynamicExpMin_.empty())
495  GUM_ERROR(OperationNotAllowed,
496  errTxt + "_dynamicExpectations() needs to be called before");
497 
498  if (!dynamicExpMin_.exists(
499  varName) /*dynamicExpMin_.find(varName) == dynamicExpMin_.end()*/)
500  GUM_ERROR(NotFound, errTxt + "variable name not found : " << varName);
501 
502  return dynamicExpMin_[varName];
503  }

◆ enableEpsilon()

INLINE void gum::ApproximationScheme::enableEpsilon ( )
virtualinherited

Enable stopping criterion on epsilon.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 56 of file approximationScheme_inl.h.


56 { enabled_eps_ = true; }

◆ enableMaxIter()

INLINE void gum::ApproximationScheme::enableMaxIter ( )
virtualinherited

Enable stopping criterion on max iterations.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 107 of file approximationScheme_inl.h.


107 { enabled_max_iter_ = true; }

◆ enableMaxTime()

INLINE void gum::ApproximationScheme::enableMaxTime ( )
virtualinherited

Enable stopping criterion on timeout.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 133 of file approximationScheme_inl.h.


133 { enabled_max_time_ = true; }

◆ enableMinEpsilonRate()

INLINE void gum::ApproximationScheme::enableMinEpsilonRate ( )
virtualinherited

Enable stopping criterion on epsilon rate.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 83 of file approximationScheme_inl.h.


83  {
84  enabled_min_rate_eps_ = true;
85  }

◆ epsilon()

INLINE double gum::ApproximationScheme::epsilon ( ) const
virtualinherited

Returns the value of epsilon.

Returns
The value of epsilon.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 50 of file approximationScheme_inl.h.


50 { return eps_; }
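Taken together, the enable* accessors and thresholds above configure independent stopping rules. The sketch below is a standalone toy, not aGrUM's actual ApproximationScheme class; it shows how the enabled flags and thresholds typically combine: iteration stops as soon as any enabled criterion fires.

```cpp
#include <cstddef>

// Toy model (not aGrUM's class) of the stopping rules configured by
// enableEpsilon() / enableMaxIter(): any enabled criterion can stop the scheme.
struct StoppingRules {
  double eps = 1e-4;             // convergence threshold, cf. epsilon()
  bool enabled_eps = true;       // cf. enableEpsilon()
  std::size_t max_iter = 100;    // iteration budget
  bool enabled_max_iter = true;  // cf. enableMaxIter()

  bool shouldStop(double current_epsilon, std::size_t current_step) const {
    if (enabled_eps && current_epsilon < eps) return true;         // converged
    if (enabled_max_iter && current_step >= max_iter) return true; // budget spent
    return false;
  }
};
```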

◆ eraseAllEvidence()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::eraseAllEvidence ( )
virtual

Erase all inference related data to perform another one.

Evidence must be inserted again if needed, but modalities are kept. New modalities can be inserted with the appropriate method, which deletes the old ones.

Reimplemented from gum::credal::InferenceEngine< GUM_SCALAR >.

Definition at line 532 of file multipleInferenceEngine_tpl.h.

532  {
533  infE__::eraseAllEvidence();
534  Size tsize = Size(workingSet_.size());
535 
536  // delete pointers
537  for (Size bn = 0; bn < tsize; bn++) {
538  if (infE__::storeVertices_) l_marginalSets_[bn].clear();
539 
540  if (workingSet_[bn] != nullptr) delete workingSet_[bn];
541 
542  if (infE__::storeBNOpt_)
543  if (l_inferenceEngine_[bn] != nullptr) delete l_optimalNet_[bn];
544 
545  if (this->workingSetE_[bn] != nullptr) {
546  for (const auto ev: *workingSetE_[bn])
547  delete ev;
548 
549  delete workingSetE_[bn];
550  }
551 
552  if (l_inferenceEngine_[bn] != nullptr) delete l_inferenceEngine_[bn];
553  }
554 
555  // this is important, those will be resized with the correct number of
556  // threads.
557 
558  workingSet_.clear();
559  workingSetE_.clear();
560  l_inferenceEngine_.clear();
561  l_optimalNet_.clear();
562 
563  l_marginalMin_.clear();
564  l_marginalMax_.clear();
565  l_expectationMin_.clear();
566  l_expectationMax_.clear();
567  l_modal_.clear();
568  l_marginalSets_.clear();
569  l_evidence_.clear();
570  l_clusters_.clear();
571  }

◆ expectationMax() [1/2]

template<typename GUM_SCALAR >
const GUM_SCALAR & gum::credal::InferenceEngine< GUM_SCALAR >::expectationMax ( const NodeId  id) const
inherited

Get the upper expectation of a given node id.

Parameters
id: The node id whose upper expectation we want.
Returns
A constant reference to this node upper expectation.

Definition at line 481 of file inferenceEngine_tpl.h.

481  {
482  try {
483  return expectationMax_[id];
484  } catch (NotFound& err) { throw(err); }
485  }

◆ expectationMax() [2/2]

template<typename GUM_SCALAR >
const GUM_SCALAR & gum::credal::InferenceEngine< GUM_SCALAR >::expectationMax ( const std::string &  varName) const
inherited

Get the upper expectation of a given variable name.

Parameters
varName: The variable name whose upper expectation we want.
Returns
A constant reference to this variable upper expectation.

Definition at line 464 of file inferenceEngine_tpl.h.

465  {
466  try {
467  return expectationMax_[credalNet_->current_bn().idFromName(varName)];
468  } catch (NotFound& err) { throw(err); }
469  }

◆ expectationMin() [1/2]

template<typename GUM_SCALAR >
const GUM_SCALAR & gum::credal::InferenceEngine< GUM_SCALAR >::expectationMin ( const NodeId  id) const
inherited

Get the lower expectation of a given node id.

Parameters
id: The node id whose lower expectation we want.
Returns
A constant reference to this node lower expectation.

Definition at line 473 of file inferenceEngine_tpl.h.

473  {
474  try {
475  return expectationMin_[id];
476  } catch (NotFound& err) { throw(err); }
477  }

◆ expectationMin() [2/2]

template<typename GUM_SCALAR >
const GUM_SCALAR & gum::credal::InferenceEngine< GUM_SCALAR >::expectationMin ( const std::string &  varName) const
inherited

Get the lower expectation of a given variable name.

Parameters
varName: The variable name whose lower expectation we want.
Returns
A constant reference to this variable lower expectation.

Definition at line 456 of file inferenceEngine_tpl.h.

457  {
458  try {
459  return expectationMin_[credalNet_->current_bn().idFromName(varName)];
460  } catch (NotFound& err) { throw(err); }
461  }

◆ expFusion_()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::expFusion_ ( )
protected

Fusion of threads expectations.

Definition at line 409 of file multipleInferenceEngine_tpl.h.

409  {
410  // don't create threads if there are no modalities to compute expectations
411  if (this->modal_.empty()) return;
412 
413  // we can compute expectations from vertices of the final credal set
414  if (infE__::storeVertices_) {
415 #pragma omp parallel
416  {
417  int threadId = getThreadNumber();
418 
419  if (!this->l_modal_[threadId].empty()) {
420  Size nsize = Size(workingSet_[threadId]->size());
421 
422 #pragma omp for
423 
424  for (long i = 0; i < long(nsize);
425  i++) { // i needs to be signed (due to omp with visual c++
426  // 15)
427  std::string var_name = workingSet_[threadId]->variable(i).name();
428  auto delim = var_name.find_first_of("_");
429  var_name = var_name.substr(0, delim);
430 
431  if (!l_modal_[threadId].exists(var_name)) continue;
432 
433  for (const auto& vertex: infE__::marginalSets_[i]) {
434  GUM_SCALAR exp = 0;
435  Size vsize = Size(vertex.size());
436 
437  for (Size mod = 0; mod < vsize; mod++)
438  exp += vertex[mod] * l_modal_[threadId][var_name][mod];
439 
440  if (exp > infE__::expectationMax_[i])
441  infE__::expectationMax_[i] = exp;
442 
443  if (exp < infE__::expectationMin_[i])
444  infE__::expectationMin_[i] = exp;
445  }
446  } // end of : each variable parallel for
447  } // end of : if this thread has modals
448  } // end of parallel region
449  return;
450  }
451 
452 #pragma omp parallel
453  {
454  int threadId = getThreadNumber();
455 
456  if (!this->l_modal_[threadId].empty()) {
457  Size nsize = Size(workingSet_[threadId]->size());
458 #pragma omp for
459  for (long i = 0; i < long(nsize);
460  i++) { // long instead of Idx due to omp for visual C++15
461  std::string var_name = workingSet_[threadId]->variable(i).name();
462  auto delim = var_name.find_first_of("_");
463  var_name = var_name.substr(0, delim);
464 
465  if (!l_modal_[threadId].exists(var_name)) continue;
466 
467  Size tsize = Size(l_expectationMax_.size());
468 
469  for (Idx tId = 0; tId < tsize; tId++) {
470  if (l_expectationMax_[tId][i] > this->expectationMax_[i])
471  this->expectationMax_[i] = l_expectationMax_[tId][i];
472 
473  if (l_expectationMin_[tId][i] < this->expectationMin_[i])
474  this->expectationMin_[i] = l_expectationMin_[tId][i];
475  } // end of : each thread
476  } // end of : each variable
477  } // end of : if modals not empty
478  } // end of : parallel region
479  }
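The vertex branch of expFusion_ above reduces to a dot product per credal-set vertex, with running min/max over vertices. A self-contained sketch of that inner computation, using plain std::vector stand-ins for aGrUM's containers:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// For each vertex of a credal set (a probability distribution over the
// variable's states), the expectation is the dot product of the vertex with
// the variable's modalities; the lower/upper expectations are the running
// min/max over all vertices.
void fuseExpectations(const std::vector<std::vector<double>>& vertices,
                      const std::vector<double>& modalities,
                      double& expMin, double& expMax) {
  for (const auto& vertex : vertices) {
    double exp = 0;
    for (std::size_t mod = 0; mod < vertex.size(); ++mod)
      exp += vertex[mod] * modalities[mod];
    expMin = std::min(expMin, exp);
    expMax = std::max(expMax, exp);
  }
}
```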

◆ getApproximationSchemeMsg()

template<typename GUM_SCALAR >
const std::string gum::credal::InferenceEngine< GUM_SCALAR >::getApproximationSchemeMsg ( )
inlineinherited

Get approximation scheme state.

Returns
A constant string about approximation scheme state.

Definition at line 508 of file inferenceEngine.h.

508  {
509  return this->messageApproximationScheme();
510  }

◆ getT0Cluster()

template<typename GUM_SCALAR >
const NodeProperty< std::vector< NodeId > > & gum::credal::InferenceEngine< GUM_SCALAR >::getT0Cluster ( ) const
inherited

Get the t0_ cluster.

Returns
A constant reference to the t0_ cluster.

Definition at line 1008 of file inferenceEngine_tpl.h.

1008  {
1009  return t0_;
1010  }

◆ getT1Cluster()

template<typename GUM_SCALAR >
const NodeProperty< std::vector< NodeId > > & gum::credal::InferenceEngine< GUM_SCALAR >::getT1Cluster ( ) const
inherited

Get the t1_ cluster.

Returns
A constant reference to the t1_ cluster.

Definition at line 1014 of file inferenceEngine_tpl.h.

1014  {
1015  return t1_;
1016  }

◆ getVarMod2BNsMap()

template<typename GUM_SCALAR >
VarMod2BNsMap< GUM_SCALAR > * gum::credal::InferenceEngine< GUM_SCALAR >::getVarMod2BNsMap ( )
inherited

Get optimum IBayesNet.

Returns
A pointer to the optimal net object.

Definition at line 141 of file inferenceEngine_tpl.h.

141  {
142  return &dbnOpt_;
143  }

◆ history()

INLINE const std::vector< double > & gum::ApproximationScheme::history ( ) const
virtualinherited

Returns the scheme history.

Returns
The scheme history.
Exceptions
OperationNotAllowed: Raised if the scheme has not been run or if verbosity is set to false.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 172 of file approximationScheme_inl.h.


172  {
173  if (stateApproximationScheme() == ApproximationSchemeSTATE::Undefined) {
174  GUM_ERROR(OperationNotAllowed,
175  "state of the approximation scheme is undefined");
176  }
177 
178  if (verbosity() == false) {
179  GUM_ERROR(OperationNotAllowed, "No history when verbosity=false");
180  }
181 
182  return history_;
183  }

◆ initApproximationScheme()

INLINE void gum::ApproximationScheme::initApproximationScheme ( )
inherited

Initialise the scheme.

Definition at line 186 of file approximationScheme_inl.h.


186  {
187  current_state_ = ApproximationSchemeSTATE::Continue;
188  current_step_ = 0;
189  current_epsilon_ = current_rate_ = -1.0;
190  history_.clear();
191  timer_.reset();
192  }

◆ initExpectations_()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::initExpectations_ ( )
protectedinherited

Initialize lower and upper expectations before inference, the lower expectation being initialized to the highest modality value and the upper expectation to the lowest, so that the first computed expectation tightens both bounds.

Definition at line 697 of file inferenceEngine_tpl.h.

697  {
698  expectationMin_.clear();
699  expectationMax_.clear();
700 
701  if (modal_.empty()) return;
702 
703  for (auto node: credalNet_->current_bn().nodes()) {
704  std::string var_name, time_step;
705 
706  var_name = credalNet_->current_bn().variable(node).name();
707  auto delim = var_name.find_first_of("_");
708  var_name = var_name.substr(0, delim);
709 
710  if (!modal_.exists(var_name)) continue;
711 
712  expectationMin_.insert(node, modal_[var_name].back());
713  expectationMax_.insert(node, modal_[var_name].front());
714  }
715  }

◆ initMarginals_()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::initMarginals_ ( )
protectedinherited

Initialize the lower and upper marginals (and the old marginals) before inference, with lower marginals set to 1 and upper marginals to 0.

Definition at line 665 of file inferenceEngine_tpl.h.

665  {
666  marginalMin_.clear();
667  marginalMax_.clear();
668  oldMarginalMin_.clear();
669  oldMarginalMax_.clear();
670 
671  for (auto node: credalNet_->current_bn().nodes()) {
672  auto dSize = credalNet_->current_bn().variable(node).domainSize();
673  marginalMin_.insert(node, std::vector< GUM_SCALAR >(dSize, 1));
674  oldMarginalMin_.insert(node, std::vector< GUM_SCALAR >(dSize, 1));
675 
676  marginalMax_.insert(node, std::vector< GUM_SCALAR >(dSize, 0));
677  oldMarginalMax_.insert(node, std::vector< GUM_SCALAR >(dSize, 0));
678  }
679  }
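The seemingly inverted initialization above (lower marginals at 1, upper at 0) guarantees that the first marginal computed by any thread tightens both bounds, after which updates only widen the interval. A minimal illustration with plain vectors:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Lower bounds start at 1 and upper bounds at 0, so the first update always
// replaces them; subsequent updates only widen the [lo, hi] interval.
struct MarginalBounds {
  std::vector<double> lo, hi;
  explicit MarginalBounds(std::size_t domainSize)
      : lo(domainSize, 1.0), hi(domainSize, 0.0) {}

  void update(const std::vector<double>& marginal) {
    for (std::size_t m = 0; m < marginal.size(); ++m) {
      lo[m] = std::min(lo[m], marginal[m]);
      hi[m] = std::max(hi[m], marginal[m]);
    }
  }
};
```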

◆ initMarginalSets_()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::initMarginalSets_ ( )
protectedinherited

Initialize credal set vertices with empty sets.

Definition at line 682 of file inferenceEngine_tpl.h.

682  {
683  marginalSets_.clear();
684 
685  if (!storeVertices_) return;
686 
687  for (auto node: credalNet_->current_bn().nodes())
688  marginalSets_.insert(node, std::vector< std::vector< GUM_SCALAR > >());
689  }

◆ initThreadsData_()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::initThreadsData_ ( const Size num_threads,
const bool  storeVertices__,
const bool  storeBNOpt__ 
)
inlineprotected

Initialize threads data.

Parameters
num_threads: The number of threads.
storeVertices__: True if vertices should be stored, false otherwise.
storeBNOpt__: True if optimal IBayesNets should be stored, false otherwise.

Definition at line 43 of file multipleInferenceEngine_tpl.h.

46  {
47  workingSet_.clear();
48  workingSet_.resize(num_threads, nullptr);
49  workingSetE_.clear();
50  workingSetE_.resize(num_threads, nullptr);
51 
52  l_marginalMin_.clear();
53  l_marginalMin_.resize(num_threads);
54  l_marginalMax_.clear();
55  l_marginalMax_.resize(num_threads);
56  l_expectationMin_.clear();
57  l_expectationMin_.resize(num_threads);
58  l_expectationMax_.clear();
59  l_expectationMax_.resize(num_threads);
60 
61  l_clusters_.clear();
62  l_clusters_.resize(num_threads);
63 
64  if (storeVertices__) {
65  l_marginalSets_.clear();
66  l_marginalSets_.resize(num_threads);
67  }
68 
69  if (storeBNOpt__) {
70  for (Size ptr = 0; ptr < this->l_optimalNet_.size(); ptr++)
71  if (this->l_optimalNet_[ptr] != nullptr) delete l_optimalNet_[ptr];
72 
73  l_optimalNet_.clear();
74  l_optimalNet_.resize(num_threads);
75  }
76 
77  l_modal_.clear();
78  l_modal_.resize(num_threads);
79 
80  this->oldMarginalMin_.clear();
81  this->oldMarginalMin_ = this->marginalMin_;
82  this->oldMarginalMax_.clear();
83  this->oldMarginalMax_ = this->marginalMax_;
84  }

◆ insertEvidence() [1/2]

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::insertEvidence ( const std::map< std::string, std::vector< GUM_SCALAR > > &  eviMap)
inherited

Insert evidence from map.

Parameters
eviMap: The map from variable name to likelihood.

Definition at line 229 of file inferenceEngine_tpl.h.

230  {
231  if (!evidence_.empty()) evidence_.clear();
232 
233  for (auto it = eviMap.cbegin(), theEnd = eviMap.cend(); it != theEnd; ++it) {
234  NodeId id;
235 
236  try {
237  id = credalNet_->current_bn().idFromName(it->first);
238  } catch (NotFound& err) {
239  GUM_SHOWERROR(err);
240  continue;
241  }
242 
243  evidence_.insert(id, it->second);
244  }
245  }

◆ insertEvidence() [2/2]

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::insertEvidence ( const NodeProperty< std::vector< GUM_SCALAR > > &  evidence)
inherited

Insert evidence from Property.

Parameters
evidence: The node Property containing likelihoods.

Definition at line 251 of file inferenceEngine_tpl.h.

252  {
253  if (!evidence_.empty()) evidence_.clear();
254 
255  // use cbegin() to get const_iterator when available in aGrUM hashtables
256  for (const auto& elt: evidence) {
257  try {
258  credalNet_->current_bn().variable(elt.first);
259  } catch (NotFound& err) {
260  GUM_SHOWERROR(err);
261  continue;
262  }
263 
264  evidence_.insert(elt.first, elt.second);
265  }
266  }

◆ insertEvidenceFile()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::insertEvidenceFile ( const std::string &  path)
virtualinherited

Insert evidence from file.

Parameters
path: The path to the evidence file.

Reimplemented in gum::credal::CNLoopyPropagation< GUM_SCALAR >, and gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >.

Definition at line 270 of file inferenceEngine_tpl.h.

270  {
271  std::ifstream evi_stream(path.c_str(), std::ios::in);
272 
273  if (!evi_stream.good()) {
274  GUM_ERROR(IOError,
275  "void InferenceEngine< GUM_SCALAR "
276  ">::insertEvidence(const std::string & path) : could not "
277  "open input file : "
278  << path);
279  }
280 
281  if (!evidence_.empty()) evidence_.clear();
282 
283  std::string line, tmp;
284  char * cstr, *p;
285 
286  while (evi_stream.good() && std::strcmp(line.c_str(), "[EVIDENCE]") != 0) {
287  getline(evi_stream, line);
288  }
289 
290  while (evi_stream.good()) {
291  getline(evi_stream, line);
292 
293  if (std::strcmp(line.c_str(), "[QUERY]") == 0) break;
294 
295  if (line.size() == 0) continue;
296 
297  cstr = new char[line.size() + 1];
298  strcpy(cstr, line.c_str());
299 
300  p = strtok(cstr, " ");
301  tmp = p;
302 
303  // if user input is wrong
304  NodeId node = -1;
305 
306  try {
307  node = credalNet_->current_bn().idFromName(tmp);
308  } catch (NotFound& err) {
309  GUM_SHOWERROR(err);
310  continue;
311  }
312 
313  std::vector< GUM_SCALAR > values;
314  p = strtok(nullptr, " ");
315 
316  while (p != nullptr) {
317  values.push_back(GUM_SCALAR(atof(p)));
318  p = strtok(nullptr, " ");
319  } // end of : line
320 
321  evidence_.insert(node, values);
322 
323  delete[] p;
324  delete[] cstr;
325  } // end of : file
326 
327  evi_stream.close();
328  }
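The file layout accepted by insertEvidenceFile can be read off the listing above: lines between an "[EVIDENCE]" tag and an optional "[QUERY]" tag, each holding a variable name followed by one likelihood per state. A simplified standalone parser for that format (it keeps names as strings rather than resolving NodeIds through the credal net, and uses streams instead of strtok):

```cpp
#include <istream>
#include <map>
#include <sstream>
#include <string>
#include <vector>

// Simplified parser for the evidence file format: skip to "[EVIDENCE]",
// then read "name v0 v1 ..." lines until "[QUERY]" or end of stream.
std::map<std::string, std::vector<double>> parseEvidence(std::istream& in) {
  std::map<std::string, std::vector<double>> evidence;
  std::string line;
  while (std::getline(in, line) && line != "[EVIDENCE]") {}
  while (std::getline(in, line)) {
    if (line == "[QUERY]") break;
    if (line.empty()) continue;
    std::istringstream tokens(line);
    std::string name;
    tokens >> name;
    std::vector<double> values;
    double v;
    while (tokens >> v) values.push_back(v);
    evidence[name] = values;
  }
  return evidence;
}
```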

◆ insertModals()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::insertModals ( const std::map< std::string, std::vector< GUM_SCALAR > > &  modals)
inherited

Insert variables modalities from map to compute expectations.

Parameters
modals: The map from variable name to modalities.

Definition at line 193 of file inferenceEngine_tpl.h.

194  {
195  if (!modal_.empty()) modal_.clear();
196 
197  for (auto it = modals.cbegin(), theEnd = modals.cend(); it != theEnd; ++it) {
198  NodeId id;
199 
200  try {
201  id = credalNet_->current_bn().idFromName(it->first);
202  } catch (NotFound& err) {
203  GUM_SHOWERROR(err);
204  continue;
205  }
206 
207  // check that modals are net compatible
208  auto dSize = credalNet_->current_bn().variable(id).domainSize();
209 
210  if (dSize != it->second.size()) continue;
211 
212  // GUM_ERROR(OperationNotAllowed, "void InferenceEngine< GUM_SCALAR
213  // >::insertModals( const std::map< std::string, std::vector< GUM_SCALAR
214  // > >
215  // &modals) : modalities does not respect variable cardinality : " <<
216  // credalNet_->current_bn().variable( id ).name() << " : " << dSize << "
217  // != "
218  // << it->second.size());
219 
220  modal_.insert(it->first, it->second); //[ it->first ] = it->second;
221  }
222 
223  //_modal = modals;
224 
225  initExpectations_();
226  }

◆ insertModalsFile()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::insertModalsFile ( const std::string &  path)
inherited

Insert variables modalities from file to compute expectations.

Parameters
path: The path to the modalities file.

Definition at line 146 of file inferenceEngine_tpl.h.

146  {
147  std::ifstream mod_stream(path.c_str(), std::ios::in);
148 
149  if (!mod_stream.good()) {
150  GUM_ERROR(OperationNotAllowed,
151  "void InferenceEngine< GUM_SCALAR "
152  ">::insertModals(const std::string & path) : "
153  "could not open input file : "
154  << path);
155  }
156 
157  if (!modal_.empty()) modal_.clear();
158 
159  std::string line, tmp;
160  char * cstr, *p;
161 
162  while (mod_stream.good()) {
163  getline(mod_stream, line);
164 
165  if (line.size() == 0) continue;
166 
167  cstr = new char[line.size() + 1];
168  strcpy(cstr, line.c_str());
169 
170  p = strtok(cstr, " ");
171  tmp = p;
172 
173  std::vector< GUM_SCALAR > values;
174  p = strtok(nullptr, " ");
175 
176  while (p != nullptr) {
177  values.push_back(GUM_SCALAR(atof(p)));
178  p = strtok(nullptr, " ");
179  } // end of : line
180 
181  modal_.insert(tmp, values); //[tmp] = values;
182 
183  delete[] p;
184  delete[] cstr;
185  } // end of : file
186 
187  mod_stream.close();
188 
189  initExpectations_();
190  }

◆ insertQuery()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::insertQuery ( const NodeProperty< std::vector< bool > > &  query)
inherited

Insert query variables and states from Property.

Parameters
query: The node Property containing the queried variables' states.

Definition at line 331 of file inferenceEngine_tpl.h.

332  {
333  if (!query_.empty()) query_.clear();
334 
335  for (const auto& elt: query) {
336  try {
337  credalNet_->current_bn().variable(elt.first);
338  } catch (NotFound& err) {
339  GUM_SHOWERROR(err);
340  continue;
341  }
342 
343  query_.insert(elt.first, elt.second);
344  }
345  }

◆ insertQueryFile()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::insertQueryFile ( const std::string &  path)
inherited

Insert query variables states from file.

Parameters
path: The path to the query file.

Definition at line 348 of file inferenceEngine_tpl.h.

348  {
349  std::ifstream evi_stream(path.c_str(), std::ios::in);
350 
351  if (!evi_stream.good()) {
352  GUM_ERROR(IOError,
353  "void InferenceEngine< GUM_SCALAR >::insertQuery(const "
354  "std::string & path) : could not open input file : "
355  << path);
356  }
357 
358  if (!query_.empty()) query_.clear();
359 
360  std::string line, tmp;
361  char * cstr, *p;
362 
363  while (evi_stream.good() && std::strcmp(line.c_str(), "[QUERY]") != 0) {
364  getline(evi_stream, line);
365  }
366 
367  while (evi_stream.good()) {
368  getline(evi_stream, line);
369 
370  if (std::strcmp(line.c_str(), "[EVIDENCE]") == 0) break;
371 
372  if (line.size() == 0) continue;
373 
374  cstr = new char[line.size() + 1];
375  strcpy(cstr, line.c_str());
376 
377  p = strtok(cstr, " ");
378  tmp = p;
379 
380  // if user input is wrong
381  NodeId node = -1;
382 
383  try {
384  node = credalNet_->current_bn().idFromName(tmp);
385  } catch (NotFound& err) {
386  GUM_SHOWERROR(err);
387  continue;
388  }
389 
390  auto dSize = credalNet_->current_bn().variable(node).domainSize();
391 
392  p = strtok(nullptr, " ");
393 
394  if (p == nullptr) {
395  query_.insert(node, std::vector< bool >(dSize, true));
396  } else {
397  std::vector< bool > values(dSize, false);
398 
399  while (p != nullptr) {
400  if ((Size)atoi(p) >= dSize)
401  GUM_ERROR(OutOfBounds,
402  "void InferenceEngine< GUM_SCALAR "
403  ">::insertQuery(const std::string & path) : "
404  "query modality is higher or equal to "
405  "cardinality");
406 
407  values[atoi(p)] = true;
408  p = strtok(nullptr, " ");
409  } // end of : line
410 
411  query_.insert(node, values);
412  }
413 
414  delete[] p;
415  delete[] cstr;
416  } // end of : file
417 
418  evi_stream.close();
419  }
#define GUM_SHOWERROR(e)
Definition: exceptions.h:60
const CredalNet< GUM_SCALAR > * credalNet_
A pointer to the Credal Net used.
query query_
Holds the query nodes states.
void clear()
Removes all the elements in the hash table.
std::size_t Size
In aGrUM, hashed values are unsigned long int.
Definition: types.h:47
value_type & insert(const Key &key, const Val &val)
Adds a new element (actually a copy of this element) into the hash table.
bool empty() const noexcept
Indicates whether the hash table is empty.
Size NodeId
Type for node ids.
Definition: graphElements.h:97
#define GUM_ERROR(type, msg)
Definition: exceptions.h:54

◆ isEnabledEpsilon()

INLINE bool gum::ApproximationScheme::isEnabledEpsilon ( ) const
virtualinherited

Returns true if stopping criterion on epsilon is enabled, false otherwise.

Returns
Returns true if stopping criterion on epsilon is enabled, false otherwise.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 60 of file approximationScheme_inl.h.


60  {
61  return enabled_eps_;
62  }
bool enabled_eps_
If true, the threshold convergence is enabled.

◆ isEnabledMaxIter()

INLINE bool gum::ApproximationScheme::isEnabledMaxIter ( ) const
virtualinherited

Returns true if stopping criterion on max iterations is enabled, false otherwise.

Returns
Returns true if stopping criterion on max iterations is enabled, false otherwise.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 111 of file approximationScheme_inl.h.


111  {
112  return enabled_max_iter_;
113  }
bool enabled_max_iter_
If true, the maximum iterations stopping criterion is enabled.

◆ isEnabledMaxTime()

INLINE bool gum::ApproximationScheme::isEnabledMaxTime ( ) const
virtualinherited

Returns true if stopping criterion on timeout is enabled, false otherwise.

Returns
Returns true if stopping criterion on timeout is enabled, false otherwise.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 137 of file approximationScheme_inl.h.


137  {
138  return enabled_max_time_;
139  }
bool enabled_max_time_
If true, the timeout is enabled.

◆ isEnabledMinEpsilonRate()

INLINE bool gum::ApproximationScheme::isEnabledMinEpsilonRate ( ) const
virtualinherited

Returns true if stopping criterion on epsilon rate is enabled, false otherwise.

Returns
Returns true if stopping criterion on epsilon rate is enabled, false otherwise.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 89 of file approximationScheme_inl.h.


89  {
90  return enabled_min_rate_eps_;
91  }
bool enabled_min_rate_eps_
If true, the minimal threshold for epsilon rate is enabled.

◆ makeInference()

template<typename GUM_SCALAR , class BNInferenceEngine >
virtual void gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::makeInference ( )
pure virtual

To be redefined by each credal net algorithm.

Starts the inference.

Implements gum::credal::InferenceEngine< GUM_SCALAR >.

Implemented in gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >.

◆ marginalMax() [1/2]

template<typename GUM_SCALAR >
gum::Potential< GUM_SCALAR > gum::credal::InferenceEngine< GUM_SCALAR >::marginalMax ( const NodeId  id) const
inherited

Get the upper marginals of a given node id.

Parameters
id  The node id whose upper marginals we want.
Returns
This node's upper marginals, returned as a Potential.

Definition at line 446 of file inferenceEngine_tpl.h.

446  {
447  try {
448  Potential< GUM_SCALAR > res;
449  res.add(credalNet_->current_bn().variable(id));
450  res.fillWith(marginalMax_[id]);
451  return res;
452  } catch (NotFound& err) { throw(err); }
453  }
const CredalNet< GUM_SCALAR > * credalNet_
A pointer to the Credal Net used.
margi marginalMax_
Upper marginals.

◆ marginalMax() [2/2]

template<typename GUM_SCALAR >
INLINE Potential< GUM_SCALAR > gum::credal::InferenceEngine< GUM_SCALAR >::marginalMax ( const std::string &  varName) const
inherited

Get the upper marginals of a given variable name.

Parameters
varName  The variable name whose upper marginals we want.
Returns
This variable's upper marginals, returned as a Potential.

Definition at line 428 of file inferenceEngine_tpl.h.

429  {
430  return marginalMax(credalNet_->current_bn().idFromName(varName));
431  }
Potential< GUM_SCALAR > marginalMax(const NodeId id) const
Get the upper marginals of a given node id.
const CredalNet< GUM_SCALAR > * credalNet_
A pointer to the Credal Net used.

◆ marginalMin() [1/2]

template<typename GUM_SCALAR >
gum::Potential< GUM_SCALAR > gum::credal::InferenceEngine< GUM_SCALAR >::marginalMin ( const NodeId  id) const
inherited

Get the lower marginals of a given node id.

Parameters
id  The node id whose lower marginals we want.
Returns
This node's lower marginals, returned as a Potential.

Definition at line 435 of file inferenceEngine_tpl.h.

435  {
436  try {
437  Potential< GUM_SCALAR > res;
438  res.add(credalNet_->current_bn().variable(id));
439  res.fillWith(marginalMin_[id]);
440  return res;
441  } catch (NotFound& err) { throw(err); }
442  }
const CredalNet< GUM_SCALAR > * credalNet_
A pointer to the Credal Net used.
margi marginalMin_
Lower marginals.

◆ marginalMin() [2/2]

template<typename GUM_SCALAR >
INLINE Potential< GUM_SCALAR > gum::credal::InferenceEngine< GUM_SCALAR >::marginalMin ( const std::string &  varName) const
inherited

Get the lower marginals of a given variable name.

Parameters
varName  The variable name whose lower marginals we want.
Returns
This variable's lower marginals, returned as a Potential.

Definition at line 422 of file inferenceEngine_tpl.h.

423  {
424  return marginalMin(credalNet_->current_bn().idFromName(varName));
425  }
const CredalNet< GUM_SCALAR > * credalNet_
A pointer to the Credal Net used.
Potential< GUM_SCALAR > marginalMin(const NodeId id) const
Get the lower marginals of a given node id.

◆ maxIter()

INLINE Size gum::ApproximationScheme::maxIter ( ) const
virtualinherited

Returns the criterion on number of iterations.

Returns
Returns the criterion on number of iterations.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 101 of file approximationScheme_inl.h.


101 { return max_iter_; }
Size max_iter_
The maximum iterations.

◆ maxTime()

INLINE double gum::ApproximationScheme::maxTime ( ) const
virtualinherited

Returns the timeout (in seconds).

Returns
Returns the timeout (in seconds).

Implements gum::IApproximationSchemeConfiguration.

Definition at line 124 of file approximationScheme_inl.h.


124 { return max_time_; }
double max_time_
The timeout.

◆ messageApproximationScheme()

INLINE std::string gum::IApproximationSchemeConfiguration::messageApproximationScheme ( ) const
inherited

Returns the approximation scheme message.

Returns
Returns the approximation scheme message.

Definition at line 39 of file IApproximationSchemeConfiguration_inl.h.


39  {
40  std::stringstream s;
41 
42  switch (stateApproximationScheme()) {
43  case ApproximationSchemeSTATE::Continue:
44  s << "in progress";
45  break;
46 
47  case ApproximationSchemeSTATE::Epsilon:
48  s << "stopped with epsilon=" << epsilon();
49  break;
50 
51  case ApproximationSchemeSTATE::Rate:
52  s << "stopped with rate=" << minEpsilonRate();
53  break;
54 
55  case ApproximationSchemeSTATE::Limit:
56  s << "stopped with max iteration=" << maxIter();
57  break;
58 
59  case ApproximationSchemeSTATE::TimeLimit:
60  s << "stopped with timeout=" << maxTime();
61  break;
62 
63  case ApproximationSchemeSTATE::Stopped:
64  s << "stopped on request";
65  break;
66 
67  case ApproximationSchemeSTATE::Undefined:
68  s << "undefined state";
69  break;
70  };
71 
72  return s.str();
73  }
virtual double epsilon() const =0
Returns the value of epsilon.
virtual ApproximationSchemeSTATE stateApproximationScheme() const =0
Returns the approximation scheme state.
virtual double maxTime() const =0
Returns the timeout (in seconds).
virtual Size maxIter() const =0
Returns the criterion on number of iterations.
virtual double minEpsilonRate() const =0
Returns the value of the minimal epsilon rate.
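The state-to-message mapping above can be sketched standalone; the enum values below mirror `ApproximationSchemeSTATE` but are redeclared locally, so treat the names as assumptions rather than the library's exact declaration:

```cpp
#include <cstddef>
#include <sstream>
#include <string>

// Local stand-in for IApproximationSchemeConfiguration::ApproximationSchemeSTATE.
enum class SchemeState { Undefined, Continue, Epsilon, Rate, Limit, TimeLimit, Stopped };

// Build the human-readable status message, mirroring the switch above.
std::string schemeMessage(SchemeState state, double eps, double rate,
                          std::size_t maxIter, double maxTime) {
  std::stringstream s;
  switch (state) {
    case SchemeState::Continue:  s << "in progress"; break;
    case SchemeState::Epsilon:   s << "stopped with epsilon=" << eps; break;
    case SchemeState::Rate:      s << "stopped with rate=" << rate; break;
    case SchemeState::Limit:     s << "stopped with max iteration=" << maxIter; break;
    case SchemeState::TimeLimit: s << "stopped with timeout=" << maxTime; break;
    case SchemeState::Stopped:   s << "stopped on request"; break;
    case SchemeState::Undefined: s << "undefined state"; break;
  }
  return s.str();
}
```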

◆ minEpsilonRate()

INLINE double gum::ApproximationScheme::minEpsilonRate ( ) const
virtualinherited

Returns the value of the minimal epsilon rate.

Returns
Returns the value of the minimal epsilon rate.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 73 of file approximationScheme_inl.h.


73  {
74  return min_rate_eps_;
75  }
double min_rate_eps_
Threshold for the epsilon rate.

◆ nbrIterations()

INLINE Size gum::ApproximationScheme::nbrIterations ( ) const
virtualinherited

Returns the number of iterations.

Returns
Returns the number of iterations.
Exceptions
OperationNotAllowed  Raised if the scheme did not perform.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 162 of file approximationScheme_inl.h.


162  {
163  if (stateApproximationScheme() == ApproximationSchemeSTATE::Undefined) {
164  GUM_ERROR(OperationNotAllowed,
165  "state of the approximation scheme is undefined");
166  }
167 
168  return current_step_;
169  }
ApproximationSchemeSTATE stateApproximationScheme() const
Returns the approximation scheme state.
Size current_step_
The current step.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:54

◆ optFusion_()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::optFusion_ ( )
protected

Fusion of the threads' optimal IBayesNets.

Definition at line 482 of file multipleInferenceEngine_tpl.h.

482  {
483  typedef std::vector< bool > dBN;
484 
485  Size nsize = Size(workingSet_[0]->size());
486 
487  // no parallel insert in hash-tables (OptBN)
488  for (Idx i = 0; i < nsize; i++) {
489  // we don't store anything for observed variables
490  if (infE__::evidence_.exists(i)) continue;
491 
492  Size dSize = Size(l_marginalMin_[0][i].size());
493 
494  for (Size j = 0; j < dSize; j++) {
495  // go through all threads
496  std::vector< Size > keymin(3);
497  keymin[0] = i;
498  keymin[1] = j;
499  keymin[2] = 0;
500  std::vector< Size > keymax(keymin);
501  keymax[2] = 1;
502 
503  Size tsize = Size(l_marginalMin_.size());
504 
505  for (Size tId = 0; tId < tsize; tId++) {
506  if (l_marginalMin_[tId][i][j] == this->marginalMin_[i][j]) {
507  const std::vector< dBN* >& tOpts
508  = l_optimalNet_[tId]->getBNOptsFromKey(keymin);
509  Size osize = Size(tOpts.size());
510 
511  for (Size bn = 0; bn < osize; bn++) {
512  infE__::dbnOpt_.insert(*tOpts[bn], keymin);
513  }
514  }
515 
516  if (l_marginalMax_[tId][i][j] == this->marginalMax_[i][j]) {
517  const std::vector< dBN* >& tOpts
518  = l_optimalNet_[tId]->getBNOptsFromKey(keymax);
519  Size osize = Size(tOpts.size());
520 
521  for (Size bn = 0; bn < osize; bn++) {
522  infE__::dbnOpt_.insert(*tOpts[bn], keymax);
523  }
524  }
525  } // end of : all threads
526  } // end of : all modalities
527  } // end of : all variables
528  }
margis__ l_marginalMin_
Threads lower marginals, one per thread.
margi evidence_
Holds observed variables states.
std::vector< VarMod2BNsMap< GUM_SCALAR > *> l_optimalNet_
Threads optimal IBayesNet.
std::vector< bnet__ *> workingSet_
Threads IBayesNet.
VarMod2BNsMap< GUM_SCALAR > dbnOpt_
Object used to efficiently store optimal bayes net during inference, for some algorithms.
std::size_t Size
In aGrUM, hashed values are unsigned long int.
Definition: types.h:47
margi marginalMax_
Upper marginals.
margi marginalMin_
Lower marginals.
margis__ l_marginalMax_
Threads upper marginals, one per thread.
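The fusion above keys each optimum by a 3-element vector — `{node, modality, 0}` for the lower bound and `{node, modality, 1}` for the upper. A minimal sketch of that merge, with a plain `std::map` standing in for `VarMod2BNsMap` and thread ids standing in for the stored IBayesNets (both stand-ins are assumptions):

```cpp
#include <cstddef>
#include <map>
#include <vector>

using Key = std::vector< std::size_t >;   // {node, modality, 0 = min / 1 = max}

// For each node/modality, record which threads attained the global lower and
// upper marginal bounds (the global bounds are assumed to be precomputed, as
// they are before optFusion_ runs).
std::map< Key, std::vector< std::size_t > >
fuseOptima(const std::vector< std::vector< std::vector< double > > >& lMin,
           const std::vector< std::vector< std::vector< double > > >& lMax,
           const std::vector< std::vector< double > >&                gMin,
           const std::vector< std::vector< double > >&                gMax) {
  std::map< Key, std::vector< std::size_t > > opt;
  const std::size_t nNodes = gMin.size();

  for (std::size_t i = 0; i < nNodes; ++i) {
    for (std::size_t j = 0; j < gMin[i].size(); ++j) {
      for (std::size_t tId = 0; tId < lMin.size(); ++tId) {
        if (lMin[tId][i][j] == gMin[i][j]) opt[Key{i, j, 0}].push_back(tId);
        if (lMax[tId][i][j] == gMax[i][j]) opt[Key{i, j, 1}].push_back(tId);
      }
    }
  }
  return opt;
}
```

The real method additionally skips observed variables and copies the per-thread optimal networks rather than thread ids.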

◆ periodSize()

INLINE Size gum::ApproximationScheme::periodSize ( ) const
virtualinherited

Returns the period size.

Returns
Returns the period size.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 148 of file approximationScheme_inl.h.


148 { return period_size_; }
Size period_size_
Checking criteria frequency.

◆ remainingBurnIn()

INLINE Size gum::ApproximationScheme::remainingBurnIn ( )
inherited

Returns the remaining burn in.

Returns
Returns the remaining burn in.

Definition at line 209 of file approximationScheme_inl.h.


209  {
210  if (burn_in_ > current_step_) {
211  return burn_in_ - current_step_;
212  } else {
213  return 0;
214  }
215  }
Size burn_in_
Number of iterations before checking stopping criteria.
Size current_step_
The current step.

◆ repetitiveInd()

template<typename GUM_SCALAR >
bool gum::credal::InferenceEngine< GUM_SCALAR >::repetitiveInd ( ) const
inherited

Get the current independence status.

Returns
True if repetitive, False otherwise.

Definition at line 120 of file inferenceEngine_tpl.h.

120  {
121  return repetitiveInd_;
122  }
bool repetitiveInd_
True if using repetitive independence ( dynamic network only ), False otherwise.

◆ repetitiveInit_()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::repetitiveInit_ ( )
protectedinherited

Initialize t0_ and t1_ clusters.

Definition at line 786 of file inferenceEngine_tpl.h.

786  {
787  timeSteps_ = 0;
788  t0_.clear();
789  t1_.clear();
790 
791  // t = 0 vars belongs to t0_ as keys
792  for (auto node: credalNet_->current_bn().dag().nodes()) {
793  std::string var_name = credalNet_->current_bn().variable(node).name();
794  auto delim = var_name.find_first_of("_");
795 
796  if (delim > var_name.size()) {
797  GUM_ERROR(InvalidArgument,
798  "void InferenceEngine< GUM_SCALAR "
799  ">::repetitiveInit_() : the network does not "
800  "appear to be dynamic");
801  }
802 
803  std::string time_step = var_name.substr(delim + 1, 1);
804 
805  if (time_step.compare("0") == 0) t0_.insert(node, std::vector< NodeId >());
806  }
807 
808  // t = 1 vars belongs to either t0_ as member value or t1_ as keys
809  for (const auto& node: credalNet_->current_bn().dag().nodes()) {
810  std::string var_name = credalNet_->current_bn().variable(node).name();
811  auto delim = var_name.find_first_of("_");
812  std::string time_step = var_name.substr(delim + 1, var_name.size());
813  var_name = var_name.substr(0, delim);
814  delim = time_step.find_first_of("_");
815  time_step = time_step.substr(0, delim);
816 
817  if (time_step.compare("1") == 0) {
818  bool found = false;
819 
820  for (const auto& elt: t0_) {
821  std::string var_0_name
822  = credalNet_->current_bn().variable(elt.first).name();
823  delim = var_0_name.find_first_of("_");
824  var_0_name = var_0_name.substr(0, delim);
825 
826  if (var_name.compare(var_0_name) == 0) {
827  const Potential< GUM_SCALAR >* potential(
828  &credalNet_->current_bn().cpt(node));
829  const Potential< GUM_SCALAR >* potential2(
830  &credalNet_->current_bn().cpt(elt.first));
831 
832  if (potential->domainSize() == potential2->domainSize())
833  t0_[elt.first].push_back(node);
834  else
835  t1_.insert(node, std::vector< NodeId >());
836 
837  found = true;
838  break;
839  }
840  }
841 
842  if (!found) { t1_.insert(node, std::vector< NodeId >()); }
843  }
844  }
845 
846  // t > 1 vars belongs to either t0_ or t1_ as member value
847  // remember timeSteps_
848  for (auto node: credalNet_->current_bn().dag().nodes()) {
849  std::string var_name = credalNet_->current_bn().variable(node).name();
850  auto delim = var_name.find_first_of("_");
851  std::string time_step = var_name.substr(delim + 1, var_name.size());
852  var_name = var_name.substr(0, delim);
853  delim = time_step.find_first_of("_");
854  time_step = time_step.substr(0, delim);
855 
856  if (time_step.compare("0") != 0 && time_step.compare("1") != 0) {
857  // keep max time_step
858  if (atoi(time_step.c_str()) > timeSteps_)
859  timeSteps_ = atoi(time_step.c_str());
860 
861  std::string var_0_name;
862  bool found = false;
863 
864  for (const auto& elt: t0_) {
865  std::string var_0_name
866  = credalNet_->current_bn().variable(elt.first).name();
867  delim = var_0_name.find_first_of("_");
868  var_0_name = var_0_name.substr(0, delim);
869 
870  if (var_name.compare(var_0_name) == 0) {
871  const Potential< GUM_SCALAR >* potential(
872  &credalNet_->current_bn().cpt(node));
873  const Potential< GUM_SCALAR >* potential2(
874  &credalNet_->current_bn().cpt(elt.first));
875 
876  if (potential->domainSize() == potential2->domainSize()) {
877  t0_[elt.first].push_back(node);
878  found = true;
879  break;
880  }
881  }
882  }
883 
884  if (!found) {
885  for (const auto& elt: t1_) {
886  std::string var_0_name
887  = credalNet_->current_bn().variable(elt.first).name();
888  auto delim = var_0_name.find_first_of("_");
889  var_0_name = var_0_name.substr(0, delim);
890 
891  if (var_name.compare(var_0_name) == 0) {
892  const Potential< GUM_SCALAR >* potential(
893  &credalNet_->current_bn().cpt(node));
894  const Potential< GUM_SCALAR >* potential2(
895  &credalNet_->current_bn().cpt(elt.first));
896 
897  if (potential->domainSize() == potential2->domainSize()) {
898  t1_[elt.first].push_back(node);
899  break;
900  }
901  }
902  }
903  }
904  }
905  }
906  }
const CredalNet< GUM_SCALAR > * credalNet_
A pointer to the Credal Net used.
cluster t1_
Clusters of nodes used with dynamic networks.
cluster t0_
Clusters of nodes used with dynamic networks.
void clear()
Removes all the elements in the hash table.
int timeSteps_
The number of time steps of this network (only useful for dynamic networks).
value_type & insert(const Key &key, const Val &val)
Adds a new element (actually a copy of this element) into the hash table.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:54
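repetitiveInit_ relies on dynamic-network variables being named `<base>_<timestep>` (e.g. `X_0`, `X_1`, `X_12`); a small helper sketching the `find_first_of`/`substr` split performed above (a hypothetical standalone function, not aGrUM API):

```cpp
#include <stdexcept>
#include <string>
#include <utility>

// Split a dynamic-network variable name "<base>_<t>" into its base name and
// time-step string; throws if no '_' is found (the network is not dynamic).
std::pair< std::string, std::string > splitTimeStep(const std::string& varName) {
  auto delim = varName.find_first_of('_');
  if (delim == std::string::npos)
    throw std::invalid_argument("variable name has no time step: " + varName);

  std::string base = varName.substr(0, delim);
  std::string step = varName.substr(delim + 1);

  // anything after a second '_' (e.g. "X_0_bis") is cut, as in the original
  step = step.substr(0, step.find_first_of('_'));
  return {base, step};
}
```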

◆ saveExpectations()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::saveExpectations ( const std::string &  path) const
inherited

Saves expectations to file.

Parameters
pathThe path to the file to be used.

Definition at line 556 of file inferenceEngine_tpl.h.

557  {
558  if (dynamicExpMin_.empty()) //_modal.empty())
559  return;
560 
561  // else not here, to keep the const (natural with a saving process)
562  // else if(dynamicExpMin_.empty() || dynamicExpMax_.empty())
563  //_dynamicExpectations(); // works with or without a dynamic network
564 
565  std::ofstream m_stream(path.c_str(), std::ios::out | std::ios::trunc);
566 
567  if (!m_stream.good()) {
568  GUM_ERROR(IOError,
569  "void InferenceEngine< GUM_SCALAR "
570  ">::saveExpectations(const std::string & path) : could "
571  "not open output file : "
572  << path);
573  }
574 
575  for (const auto& elt: dynamicExpMin_) {
576  m_stream << elt.first; // it->first;
577 
578  // iterates over a vector
579  for (const auto& elt2: elt.second) {
580  m_stream << " " << elt2;
581  }
582 
583  m_stream << std::endl;
584  }
585 
586  for (const auto& elt: dynamicExpMax_) {
587  m_stream << elt.first;
588 
589  // iterates over a vector
590  for (const auto& elt2: elt.second) {
591  m_stream << " " << elt2;
592  }
593 
594  m_stream << std::endl;
595  }
596 
597  m_stream.close();
598  }
dynExpe dynamicExpMin_
Lower dynamic expectations.
dynExpe dynamicExpMax_
Upper dynamic expectations.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:54

◆ saveMarginals()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::saveMarginals ( const std::string &  path) const
inherited

Saves marginals to file.

Parameters
pathThe path to the file to be used.

Definition at line 530 of file inferenceEngine_tpl.h.

531  {
532  std::ofstream m_stream(path.c_str(), std::ios::out | std::ios::trunc);
533 
534  if (!m_stream.good()) {
535  GUM_ERROR(IOError,
536  "void InferenceEngine< GUM_SCALAR >::saveMarginals(const "
537  "std::string & path) const : could not open output file "
538  ": "
539  << path);
540  }
541 
542  for (const auto& elt: marginalMin_) {
543  Size esize = Size(elt.second.size());
544 
545  for (Size mod = 0; mod < esize; mod++) {
546  m_stream << credalNet_->current_bn().variable(elt.first).name() << " "
547  << mod << " " << (elt.second)[mod] << " "
548  << marginalMax_[elt.first][mod] << std::endl;
549  }
550  }
551 
552  m_stream.close();
553  }
const CredalNet< GUM_SCALAR > * credalNet_
A pointer to the Credal Net used.
std::size_t Size
In aGrUM, hashed values are unsigned long int.
Definition: types.h:47
margi marginalMax_
Upper marginals.
margi marginalMin_
Lower marginals.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:54

◆ saveVertices()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::saveVertices ( const std::string &  path) const
inherited

Saves vertices to file.

Parameters
pathThe path to the file to be used.

Definition at line 630 of file inferenceEngine_tpl.h.

630  {
631  std::ofstream m_stream(path.c_str(), std::ios::out | std::ios::trunc);
632 
633  if (!m_stream.good()) {
634  GUM_ERROR(IOError,
635  "void InferenceEngine< GUM_SCALAR >::saveVertices(const "
636  "std::string & path) : could not open output file : "
637  << path);
638  }
639 
640  for (const auto& elt: marginalSets_) {
641  m_stream << credalNet_->current_bn().variable(elt.first).name()
642  << std::endl;
643 
644  for (const auto& elt2: elt.second) {
645  m_stream << "[";
646  bool first = true;
647 
648  for (const auto& elt3: elt2) {
649  if (!first) {
650  m_stream << ",";
651  first = false;
652  }
653 
654  m_stream << elt3;
655  }
656 
657  m_stream << "]\n";
658  }
659  }
660 
661  m_stream.close();
662  }
const CredalNet< GUM_SCALAR > * credalNet_
A pointer to the Credal Net used.
credalSet marginalSets_
Credal sets vertices, if enabled.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:54

◆ setEpsilon()

INLINE void gum::ApproximationScheme::setEpsilon ( double  eps)
virtualinherited

Given that we approximate f(t), stopping criterion on |f(t+1)-f(t)|.

If the criterion was disabled it will be enabled.

Parameters
eps  The new epsilon value.
Exceptions
OutOfLowerBound  Raised if eps < 0.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 42 of file approximationScheme_inl.h.


42  {
43  if (eps < 0.) { GUM_ERROR(OutOfLowerBound, "eps should be >=0"); }
44 
45  eps_ = eps;
46  enabled_eps_ = true;
47  }
double eps_
Threshold for convergence.
bool enabled_eps_
If true, the threshold convergence is enabled.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:54

◆ setMaxIter()

INLINE void gum::ApproximationScheme::setMaxIter ( Size  max)
virtualinherited

Stopping criterion on number of iterations.

If the criterion was disabled it will be enabled.

Parameters
max  The maximum number of iterations.
Exceptions
OutOfLowerBound  Raised if max < 1.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 94 of file approximationScheme_inl.h.


94  {
95  if (max < 1) { GUM_ERROR(OutOfLowerBound, "max should be >=1"); }
96  max_iter_ = max;
97  enabled_max_iter_ = true;
98  }
bool enabled_max_iter_
If true, the maximum iterations stopping criterion is enabled.
Size max_iter_
The maximum iterations.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:54

◆ setMaxTime()

INLINE void gum::ApproximationScheme::setMaxTime ( double  timeout)
virtualinherited

Stopping criterion on timeout.

If the criterion was disabled it will be enabled.

Parameters
timeout  The timeout value in seconds.
Exceptions
OutOfLowerBound  Raised if timeout <= 0.0.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 117 of file approximationScheme_inl.h.


117  {
118  if (timeout <= 0.) { GUM_ERROR(OutOfLowerBound, "timeout should be >0."); }
119  max_time_ = timeout;
120  enabled_max_time_ = true;
121  }
double max_time_
The timeout.
bool enabled_max_time_
If true, the timeout is enabled.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:54

◆ setMinEpsilonRate()

INLINE void gum::ApproximationScheme::setMinEpsilonRate ( double  rate)
virtualinherited

Given that we approximate f(t), stopping criterion on d/dt(|f(t+1)-f(t)|).

If the criterion was disabled it will be enabled.

Parameters
rate  The minimal epsilon rate.
Exceptions
OutOfLowerBound  Raised if rate < 0.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 65 of file approximationScheme_inl.h.


65  {
66  if (rate < 0) { GUM_ERROR(OutOfLowerBound, "rate should be >=0"); }
67 
68  min_rate_eps_ = rate;
69  enabled_min_rate_eps_ = true;
70  }
double min_rate_eps_
Threshold for the epsilon rate.
bool enabled_min_rate_eps_
If true, the minimal threshold for epsilon rate is enabled.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:54

◆ setPeriodSize()

INLINE void gum::ApproximationScheme::setPeriodSize ( Size  p)
virtualinherited

Sets the number of samples drawn between two checks of the stopping criteria.

Parameters
p  The new period value.
Exceptions
OutOfLowerBound  Raised if p < 1.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 142 of file approximationScheme_inl.h.


142  {
143  if (p < 1) { GUM_ERROR(OutOfLowerBound, "p should be >=1"); }
144 
145  period_size_ = p;
146  }
Size period_size_
Checking criteria frequency.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:54

◆ setRepetitiveInd()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::setRepetitiveInd ( const bool  repetitive)
inherited
Parameters
repetitive  True if repetitive independence is to be used, false otherwise. Only useful with dynamic networks.

Definition at line 111 of file inferenceEngine_tpl.h.

111  {
112  bool oldValue = repetitiveInd_;
113  repetitiveInd_ = repetitive;
114 
115  // do not compute clusters more than once
116  if (repetitiveInd_ && !oldValue) repetitiveInit_();
117  }
bool repetitiveInd_
True if using repetitive independence ( dynamic network only ), False otherwise.
void repetitiveInit_()
Initialize t0_ and t1_ clusters.

◆ setVerbosity()

INLINE void gum::ApproximationScheme::setVerbosity ( bool  v)
virtualinherited

Set the verbosity on (true) or off (false).

Parameters
v  If true, then verbosity is turned on.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 151 of file approximationScheme_inl.h.


151 { verbosity_ = v; }
bool verbosity_
If true, verbosity is enabled.

◆ startOfPeriod()

INLINE bool gum::ApproximationScheme::startOfPeriod ( )
inherited

Returns true if we are at the beginning of a period (compute error is mandatory).

Returns
Returns true if we are at the beginning of a period (compute error is mandatory).

Definition at line 196 of file approximationScheme_inl.h.


196  {
197  if (current_step_ < burn_in_) { return false; }
198 
199  if (period_size_ == 1) { return true; }
200 
201  return ((current_step_ - burn_in_) % period_size_ == 0);
202  }
Size burn_in_
Number of iterations before checking stopping criteria.
Size period_size_
Checking criteria frequency.
Size current_step_
The current step.
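The period test above reduces to simple arithmetic on the current step, the burn-in and the period size; a standalone sketch of the same logic (free function with assumed parameter order):

```cpp
#include <cstddef>

// True when the error must be (re)computed at this step: never during burn-in,
// always once the period is 1, otherwise every periodSize steps past burn-in.
bool startOfPeriod(std::size_t currentStep, std::size_t burnIn,
                   std::size_t periodSize) {
  if (currentStep < burnIn) return false;
  if (periodSize == 1) return true;
  return (currentStep - burnIn) % periodSize == 0;
}
```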

◆ stateApproximationScheme()

INLINE IApproximationSchemeConfiguration::ApproximationSchemeSTATE gum::ApproximationScheme::stateApproximationScheme ( ) const
virtualinherited

Returns the approximation scheme state.

Returns
Returns the approximation scheme state.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 157 of file approximationScheme_inl.h.


157  {
158  return current_state_;
159  }
ApproximationSchemeSTATE current_state_
The current state.

◆ stopApproximationScheme()

INLINE void gum::ApproximationScheme::stopApproximationScheme ( )
inherited

Stop the approximation scheme.

Definition at line 218 of file approximationScheme_inl.h.


◆ storeBNOpt() [1/2]

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::storeBNOpt ( const bool  value)
inherited
Parameters
value  True if optimal Bayesian networks are to be stored for each variable and each modality.

Definition at line 99 of file inferenceEngine_tpl.h.

99  {
100  storeBNOpt_ = value;
101  }
bool storeBNOpt_
True if optimal IBayesNets are stored for each variable and each modality, False otherwise.

◆ storeBNOpt() [2/2]

template<typename GUM_SCALAR >
bool gum::credal::InferenceEngine< GUM_SCALAR >::storeBNOpt ( ) const
inherited
Returns
True if optimal Bayes nets are stored for each variable and each modality, False otherwise.

Definition at line 135 of file inferenceEngine_tpl.h.

135  {
136  return storeBNOpt_;
137  }
bool storeBNOpt_
True if optimal IBayesNets are stored for each variable and each modality, False otherwise.

◆ storeVertices() [1/2]

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::storeVertices ( const bool  value)
inherited
Parameters
value  True if vertices are to be stored, false otherwise.

Definition at line 104 of file inferenceEngine_tpl.h.

104  {
105  storeVertices_ = value;
106 
107  if (value) initMarginalSets_();
108  }
bool storeVertices_
True if credal sets vertices are stored, False otherwise.
void initMarginalSets_()
Initialize credal set vertices with empty sets.

◆ storeVertices() [2/2]

template<typename GUM_SCALAR >
bool gum::credal::InferenceEngine< GUM_SCALAR >::storeVertices ( ) const
inherited

Get the vertices storage status.

Returns
True if vertices are stored, False otherwise.

Definition at line 130 of file inferenceEngine_tpl.h.

130  {
131  return storeVertices_;
132  }
bool storeVertices_
True if credal sets vertices are stored, False otherwise.

◆ toString()

template<typename GUM_SCALAR >
std::string gum::credal::InferenceEngine< GUM_SCALAR >::toString ( ) const
inherited

Returns all the nodes' marginals as a string.

Definition at line 601 of file inferenceEngine_tpl.h.

601  {
602  std::stringstream output;
603  output << std::endl;
604 
605  // use cbegin() when available
606  for (const auto& elt: marginalMin_) {
607  Size esize = Size(elt.second.size());
608 
609  for (Size mod = 0; mod < esize; mod++) {
610  output << "P(" << credalNet_->current_bn().variable(elt.first).name()
611  << "=" << mod << "|e) = [ ";
612  output << marginalMin_[elt.first][mod] << ", "
613  << marginalMax_[elt.first][mod] << " ]";
614 
615  if (!query_.empty())
616  if (query_.exists(elt.first) && query_[elt.first][mod])
617  output << " QUERY";
618 
619  output << std::endl;
620  }
621 
622  output << std::endl;
623  }
624 
625  return output.str();
626  }
const CredalNet< GUM_SCALAR > * credalNet_
A pointer to the Credal Net used.
bool exists(const Key &key) const
Checks whether there exists an element with a given key in the hashtable.
query query_
Holds the query nodes states.
std::size_t Size
In aGrUM, hashed values are unsigned long int.
Definition: types.h:47
margi marginalMax_
Upper marginals.
margi marginalMin_
Lower marginals.
bool empty() const noexcept
Indicates whether the hash table is empty.
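The body of toString() above just walks the lower/upper marginal tables and formats one interval per modality. A self-contained sketch of the same formatting loop, with a plain std::map keyed by variable name standing in for aGrUM's property maps (the function name and map layout are illustrative):

```cpp
#include <cassert>
#include <map>
#include <sstream>
#include <string>
#include <vector>

// Format "P(X=mod|e) = [ min, max ]" lines from lower/upper marginal tables.
std::string marginalsToString(
    const std::map<std::string, std::vector<double>>& marginalMin,
    const std::map<std::string, std::vector<double>>& marginalMax) {
  std::stringstream output;
  for (const auto& elt : marginalMin) {
    const auto& maxRow = marginalMax.at(elt.first);
    for (std::size_t mod = 0; mod < elt.second.size(); ++mod) {
      output << "P(" << elt.first << "=" << mod << "|e) = [ "
             << elt.second[mod] << ", " << maxRow[mod] << " ]\n";
    }
  }
  return output.str();
}
```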

◆ updateApproximationScheme()

INLINE void gum::ApproximationScheme::updateApproximationScheme ( unsigned int  incr = 1)
inherited

Update the scheme w.r.t the new error and increment steps.

Parameters
incr: The increment to add to the current step.

Definition at line 205 of file approximationScheme_inl.h.


205  {
206  current_step_ += incr;
207  }
Size current_step_
The current step.

◆ updateCredalSets_()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::updateCredalSets_ ( const NodeId id,
const std::vector< GUM_SCALAR > &  vertex,
const bool elimRedund = false 
)
inlineprotectedinherited

Given a node id and one of its possible vertices, update its credal set.

To maximise efficiency, do not pass a vertex known to be inside the polytope (i.e. one that is not at an extreme value for any modality).

Parameters
id: The id of the node to be updated.
vertex: A (potential) vertex of the node's credal set.
elimRedund: If true, remove redundant vertices (those lying inside a facet).

Definition at line 931 of file inferenceEngine_tpl.h.

934  {
935  auto& nodeCredalSet = marginalSets_[id];
936  auto dsize = vertex.size();
937 
938  bool eq = true;
939 
940  for (auto it = nodeCredalSet.cbegin(), itEnd = nodeCredalSet.cend();
941  it != itEnd;
942  ++it) {
943  eq = true;
944 
945  for (Size i = 0; i < dsize; i++) {
946  if (std::fabs(vertex[i] - (*it)[i]) > 1e-6) {
947  eq = false;
948  break;
949  }
950  }
951 
952  if (eq) break;
953  }
954 
955  if (!eq || nodeCredalSet.size() == 0) {
956  nodeCredalSet.push_back(vertex);
957  // fall through to the redundancy checks below
958  } else
959  return;
960 
961  // because of next lambda return condition
962  if (nodeCredalSet.size() == 1) return;
963 
964  // check that the point and all previously added ones are not inside the
965  // actual
966  // polytope
967  auto itEnd = std::remove_if(
968  nodeCredalSet.begin(),
969  nodeCredalSet.end(),
970  [&](const std::vector< GUM_SCALAR >& v) -> bool {
971  for (auto jt = v.cbegin(),
972  jtEnd = v.cend(),
973  minIt = marginalMin_[id].cbegin(),
974  minItEnd = marginalMin_[id].cend(),
975  maxIt = marginalMax_[id].cbegin(),
976  maxItEnd = marginalMax_[id].cend();
977  jt != jtEnd && minIt != minItEnd && maxIt != maxItEnd;
978  ++jt, ++minIt, ++maxIt) {
979  if ((std::fabs(*jt - *minIt) < 1e-6 || std::fabs(*jt - *maxIt) < 1e-6)
980  && std::fabs(*minIt - *maxIt) > 1e-6)
981  return false;
982  }
983  return true;
984  });
985 
986  nodeCredalSet.erase(itEnd, nodeCredalSet.end());
987 
988  // we need at least 2 points to make a convex combination
989  if (!elimRedund || nodeCredalSet.size() <= 2) return;
990 
991  // there may be points not inside the polytope but on one of it's facet,
992  // meaning it's still a convex combination of vertices of this facet. Here
993  // we
994  // need lrs.
995  LRSWrapper< GUM_SCALAR > lrsWrapper;
996  lrsWrapper.setUpV((unsigned int)dsize, (unsigned int)(nodeCredalSet.size()));
997 
998  for (const auto& vtx: nodeCredalSet)
999  lrsWrapper.fillV(vtx);
1000 
1001  lrsWrapper.elimRedundVrep();
1002 
1003  marginalSets_[id] = lrsWrapper.getOutput();
1004  }
credalSet marginalSets_
Credal sets vertices, if enabled.
const const_iterator & cend() const noexcept
Returns the unsafe const_iterator pointing to the end of the hashtable.
const_iterator cbegin() const
Returns an unsafe const_iterator pointing to the beginning of the hashtable.
std::size_t Size
In aGrUM, hashed values are unsigned long int.
Definition: types.h:47
margi marginalMax_
Upper marginals.
margi marginalMin_
Lower marginals.
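The first half of updateCredalSets_ is an approximate membership test: the candidate vertex is compared component-wise against every stored vertex with a 1e-6 tolerance, and is only appended when no match is found. That core check, extracted as a standalone function (the name `addIfNew` and the free-function form are illustrative, not aGrUM's API):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Append `vertex` to `credalSet` unless an existing vertex already matches it
// component-wise within a 1e-6 tolerance (the same test used in the listing).
bool addIfNew(std::vector<std::vector<double>>& credalSet,
              const std::vector<double>& vertex) {
  for (const auto& v : credalSet) {
    bool eq = true;
    for (std::size_t i = 0; i < vertex.size(); ++i) {
      if (std::fabs(vertex[i] - v[i]) > 1e-6) {
        eq = false;
        break;
      }
    }
    if (eq) return false;  // duplicate within tolerance: nothing added
  }
  credalSet.push_back(vertex);
  return true;
}
```

The tolerance matters because vertices come from floating-point inference runs: exact equality would let near-identical points pile up in the set.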

◆ updateExpectations_()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::updateExpectations_ ( const NodeId id,
const std::vector< GUM_SCALAR > &  vertex 
)
inlineprotectedinherited

Given a node id and one of its possible vertices obtained during inference, update this node's lower and upper expectations.

Parameters
id: The id of the node to be updated.
vertex: A (potential) vertex of the node's credal set.

Definition at line 909 of file inferenceEngine_tpl.h.

911  {
912  std::string var_name = credalNet_->current_bn().variable(id).name();
913  auto delim = var_name.find_first_of("_");
914 
915  var_name = var_name.substr(0, delim);
916 
917  if (modal_.exists(var_name) /*modal_.find(var_name) != modal_.end()*/) {
918  GUM_SCALAR exp = 0;
919  auto vsize = vertex.size();
920 
921  for (Size mod = 0; mod < vsize; mod++)
922  exp += vertex[mod] * modal_[var_name][mod];
923 
924  if (exp > expectationMax_[id]) expectationMax_[id] = exp;
925 
926  if (exp < expectationMin_[id]) expectationMin_[id] = exp;
927  }
928  }
const CredalNet< GUM_SCALAR > * credalNet_
A pointer to the Credal Net used.
dynExpe modal_
Variables modalities used to compute expectations.
expe expectationMax_
Upper expectations, if some variables modalities were inserted.
expe expectationMin_
Lower expectations, if some variables modalities were inserted.
std::size_t Size
In aGrUM, hashed values are unsigned long int.
Definition: types.h:47
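The update above is a dot product between the vertex (a probability distribution over the node's modalities) and the user-supplied modality values, followed by a running min/max. A std-only sketch of that step (the function signature is illustrative; aGrUM keys the bounds by node id instead of passing references):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Update lower/upper expectation bounds from one credal-set vertex.
// `modalities` holds the numeric value attached to each modality.
void updateExpectations(const std::vector<double>& vertex,
                        const std::vector<double>& modalities,
                        double& expMin, double& expMax) {
  double exp = 0;
  for (std::size_t mod = 0; mod < vertex.size(); ++mod)
    exp += vertex[mod] * modalities[mod];  // E[X] under this vertex
  if (exp > expMax) expMax = exp;
  if (exp < expMin) expMin = exp;
}
```

Iterating this over every vertex produced during inference yields the interval [expMin, expMax] of expectations over the credal set.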

◆ updateMarginals_()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::updateMarginals_ ( )
inlineprotected

Fusion of the per-thread marginals.

Definition at line 273 of file multipleInferenceEngine_tpl.h.

273  {
274 #pragma omp parallel
275  {
276  int threadId = getThreadNumber();
277  long nsize = long(workingSet_[threadId]->size());
278 
279 #pragma omp for
280 
281  for (long i = 0; i < nsize; i++) {
282  Size dSize = Size(l_marginalMin_[threadId][i].size());
283 
284  for (Size j = 0; j < dSize; j++) {
285  Size tsize = Size(l_marginalMin_.size());
286 
287  // go through all threads
288  for (Size tId = 0; tId < tsize; tId++) {
289  if (l_marginalMin_[tId][i][j] < this->marginalMin_[i][j])
290  this->marginalMin_[i][j] = l_marginalMin_[tId][i][j];
291 
292  if (l_marginalMax_[tId][i][j] > this->marginalMax_[i][j])
293  this->marginalMax_[i][j] = l_marginalMax_[tId][i][j];
294  } // end of : all threads
295  } // end of : all modalities
296  } // end of : all variables
297  } // end of : parallel region
298  }
unsigned int getThreadNumber()
Get the calling thread id.
margis__ l_marginalMin_
Threads lower marginals, one per thread.
std::vector< bnet__ *> workingSet_
Threads IBayesNet.
std::size_t Size
In aGrUM, hashed values are unsigned long int.
Definition: types.h:47
margi marginalMax_
Upper marginals.
margi marginalMin_
Lower marginals.
margis__ l_marginalMax_
Threads upper marginals, one per thread.
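Stripped of the OpenMP scaffolding, the fusion above is an element-wise min (resp. max) reduction of the per-thread tables into the global ones. The reduction for a single node looks like this (hypothetical free function; aGrUM does this in place over all nodes inside the parallel region):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Fuse per-thread lower/upper marginals for one node into the global tables.
// threadMin[tId][j] is thread tId's lower bound for modality j, etc.
void fuseMarginals(const std::vector<std::vector<double>>& threadMin,
                   const std::vector<std::vector<double>>& threadMax,
                   std::vector<double>& globalMin,
                   std::vector<double>& globalMax) {
  for (std::size_t tId = 0; tId < threadMin.size(); ++tId) {
    for (std::size_t j = 0; j < globalMin.size(); ++j) {
      globalMin[j] = std::min(globalMin[j], threadMin[tId][j]);
      globalMax[j] = std::max(globalMax[j], threadMax[tId][j]);
    }
  }
}
```

updateOldMarginals_ below performs the same reduction, only targeting oldMarginalMin_/oldMarginalMax_ instead of the current tables.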

◆ updateOldMarginals_()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::updateOldMarginals_ ( )
protected

Update old marginals (from current marginals).

Call this once to initialize the old marginals (after burn-in, for example), then use computeEpsilon_, which does the same job but also computes epsilon.

Definition at line 346 of file multipleInferenceEngine_tpl.h.

346  {
347 #pragma omp parallel
348  {
349  int threadId = getThreadNumber();
350  long nsize = long(workingSet_[threadId]->size());
351 
352 #pragma omp for
353 
354  for (long i = 0; i < nsize; i++) {
355  Size dSize = Size(l_marginalMin_[threadId][i].size());
356 
357  for (Size j = 0; j < dSize; j++) {
358  Size tsize = Size(l_marginalMin_.size());
359 
360  // go through all threads
361  for (Size tId = 0; tId < tsize; tId++) {
362  if (l_marginalMin_[tId][i][j] < this->oldMarginalMin_[i][j])
363  this->oldMarginalMin_[i][j] = l_marginalMin_[tId][i][j];
364 
365  if (l_marginalMax_[tId][i][j] > this->oldMarginalMax_[i][j])
366  this->oldMarginalMax_[i][j] = l_marginalMax_[tId][i][j];
367  } // end of : all threads
368  } // end of : all modalities
369  } // end of : all variables
370  } // end of : parallel region
371  }
margi oldMarginalMin_
Old lower marginals used to compute epsilon.
unsigned int getThreadNumber()
Get the calling thread id.
margis__ l_marginalMin_
Threads lower marginals, one per thread.
margi oldMarginalMax_
Old upper marginals used to compute epsilon.
std::vector< bnet__ *> workingSet_
Threads IBayesNet.
std::size_t Size
In aGrUM, hashed values are unsigned long int.
Definition: types.h:47
margis__ l_marginalMax_
Threads upper marginals, one per thread.

◆ updateThread_()

template<typename GUM_SCALAR , class BNInferenceEngine >
bool gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::updateThread_ ( const NodeId id,
const std::vector< GUM_SCALAR > &  vertex,
const bool elimRedund = false 
)
inlineprotected

Update thread information (marginals, expectations, IBayesNet, vertices) for a given node id.

Parameters
id: The id of the node to be updated.
vertex: The vertex.
elimRedund: True if redundancy elimination is to be performed, false otherwise (and by default).
Returns
True if the IBayesNet is kept (for now), False otherwise.

Definition at line 88 of file multipleInferenceEngine_tpl.h.

91  {
92  int tId = getThreadNumber();
93 
94  // save E(X) if we don't save vertices
95  if (!infE__::storeVertices_ && !l_modal_[tId].empty()) {
96  std::string var_name = workingSet_[tId]->variable(id).name();
97  auto delim = var_name.find_first_of("_");
98  var_name = var_name.substr(0, delim);
99 
100  if (l_modal_[tId].exists(var_name)) {
101  GUM_SCALAR exp = 0;
102  Size vsize = Size(vertex.size());
103 
104  for (Size mod = 0; mod < vsize; mod++)
105  exp += vertex[mod] * l_modal_[tId][var_name][mod];
106 
107  if (exp > l_expectationMax_[tId][id]) l_expectationMax_[tId][id] = exp;
108 
109  if (exp < l_expectationMin_[tId][id]) l_expectationMin_[tId][id] = exp;
110  }
111  } // end of : if modal (map) not empty
112 
113  bool newOne = false;
114  bool added = false;
115  bool result = false;
116  // for burn in, we need to keep checking on local marginals and not global
117  // ones
118  // (faster inference)
119  // we also don't want to store dbn for observed variables since there will
120  // be a
121  // huge number of them (probably all of them).
122  Size vsize = Size(vertex.size());
123 
124  for (Size mod = 0; mod < vsize; mod++) {
125  if (vertex[mod] < l_marginalMin_[tId][id][mod]) {
126  l_marginalMin_[tId][id][mod] = vertex[mod];
127  newOne = true;
128 
129  if (infE__::storeBNOpt_ && !infE__::evidence_.exists(id)) {
130  std::vector< Size > key(3);
131  key[0] = id;
132  key[1] = mod;
133  key[2] = 0;
134 
135  if (l_optimalNet_[tId]->insert(key, true)) result = true;
136  }
137  }
138 
139  if (vertex[mod] > l_marginalMax_[tId][id][mod]) {
140  l_marginalMax_[tId][id][mod] = vertex[mod];
141  newOne = true;
142 
143  if (infE__::storeBNOpt_ && !infE__::evidence_.exists(id)) {
144  std::vector< Size > key(3);
145  key[0] = id;
146  key[1] = mod;
147  key[2] = 1;
148 
149  if (l_optimalNet_[tId]->insert(key, true)) result = true;
150  }
151  } else if (vertex[mod] == l_marginalMin_[tId][id][mod]
152  || vertex[mod] == l_marginalMax_[tId][id][mod]) {
153  newOne = true;
154 
155  if (infE__::storeBNOpt_ && vertex[mod] == l_marginalMin_[tId][id][mod]
156  && !infE__::evidence_.exists(id)) {
157  std::vector< Size > key(3);
158  key[0] = id;
159  key[1] = mod;
160  key[2] = 0;
161 
162  if (l_optimalNet_[tId]->insert(key, false)) result = true;
163  }
164 
165  if (infE__::storeBNOpt_ && vertex[mod] == l_marginalMax_[tId][id][mod]
166  && !infE__::evidence_.exists(id)) {
167  std::vector< Size > key(3);
168  key[0] = id;
169  key[1] = mod;
170  key[2] = 1;
171 
172  if (l_optimalNet_[tId]->insert(key, false)) result = true;
173  }
174  }
175 
176  // store point to compute credal set vertices.
177  // check for redundancy at each step or at the end ?
178  if (infE__::storeVertices_ && !added && newOne) {
179  updateThreadCredalSets__(id, vertex, elimRedund);
180  added = true;
181  }
182  }
183 
184  // if all variables didn't get better marginals, we will delete
185  if (infE__::storeBNOpt_ && result) return true;
186 
187  return false;
188  }
expes__ l_expectationMax_
Threads upper expectations, one per thread.
expes__ l_expectationMin_
Threads lower expectations, one per thread.
unsigned int getThreadNumber()
Get the calling thread id.
bool storeBNOpt_
True if optimal IBayesNets are stored for each variable and each modality, False otherwise.
bool storeVertices_
True if credal sets vertices are stored, False otherwise.
margis__ l_marginalMin_
Threads lower marginals, one per thread.
void updateThreadCredalSets__(const NodeId &id, const std::vector< GUM_SCALAR > &vertex, const bool &elimRedund)
Ask for redundancy elimination of a node credal set of a calling thread.
margi evidence_
Holds observed variables states.
std::vector< VarMod2BNsMap< GUM_SCALAR > *> l_optimalNet_
Threads optimal IBayesNet.
std::vector< bnet__ *> workingSet_
Threads IBayesNet.
std::size_t Size
In aGrUM, hashed values are unsigned long int.
Definition: types.h:47
margis__ l_marginalMax_
Threads upper marginals, one per thread.

◆ updateThreadCredalSets__()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::updateThreadCredalSets__ ( const NodeId id,
const std::vector< GUM_SCALAR > &  vertex,
const bool elimRedund 
)
inlineprivate

Ask for redundancy elimination of a node credal set of a calling thread.

Called by updateThread_ if vertices are stored.

Parameters
id: A constant reference to the id of the node whose credal set is to be checked for redundancy.
vertex: The vertex to add to the credal set.
elimRedund: True if redundancy elimination is to be performed, false otherwise (and by default).

Definition at line 192 of file multipleInferenceEngine_tpl.h.

194  {
195  int tId = getThreadNumber();
196  auto& nodeCredalSet = l_marginalSets_[tId][id];
197  Size dsize = Size(vertex.size());
198 
199  bool eq = true;
200 
201  for (auto it = nodeCredalSet.cbegin(), itEnd = nodeCredalSet.cend();
202  it != itEnd;
203  ++it) {
204  eq = true;
205 
206  for (Size i = 0; i < dsize; i++) {
207  if (std::fabs(vertex[i] - (*it)[i]) > 1e-6) {
208  eq = false;
209  break;
210  }
211  }
212 
213  if (eq) break;
214  }
215 
216  if (!eq || nodeCredalSet.size() == 0) {
217  nodeCredalSet.push_back(vertex);
218  // fall through to the redundancy checks below
219  } else
220  return;
221 
225  if (nodeCredalSet.size() == 1) return;
226 
227  // check that the point and all previously added ones are not inside the
228  // actual
229  // polytope
230  auto itEnd = std::remove_if(
231  nodeCredalSet.begin(),
232  nodeCredalSet.end(),
233  [&](const std::vector< GUM_SCALAR >& v) -> bool {
234  for (auto jt = v.cbegin(),
235  jtEnd = v.cend(),
236  minIt = l_marginalMin_[tId][id].cbegin(),
237  minItEnd = l_marginalMin_[tId][id].cend(),
238  maxIt = l_marginalMax_[tId][id].cbegin(),
239  maxItEnd = l_marginalMax_[tId][id].cend();
240  jt != jtEnd && minIt != minItEnd && maxIt != maxItEnd;
241  ++jt, ++minIt, ++maxIt) {
242  if ((std::fabs(*jt - *minIt) < 1e-6 || std::fabs(*jt - *maxIt) < 1e-6)
243  && std::fabs(*minIt - *maxIt) > 1e-6)
244  return false;
245  }
246  return true;
247  });
248 
249  nodeCredalSet.erase(itEnd, nodeCredalSet.end());
250 
251  // we need at least 2 points to make a convex combination
252  if (!elimRedund || nodeCredalSet.size() <= 2) return;
253 
254  // there may be points not inside the polytope but on one of it's facet,
255  // meaning it's still a convex combination of vertices of this facet. Here
256  // we
257  // need lrs.
258  Size setSize = Size(nodeCredalSet.size());
259 
260  LRSWrapper< GUM_SCALAR > lrsWrapper;
261  lrsWrapper.setUpV(dsize, setSize);
262 
263  for (const auto& vtx: nodeCredalSet)
264  lrsWrapper.fillV(vtx);
265 
266  lrsWrapper.elimRedundVrep();
267 
268  l_marginalSets_[tId][id] = lrsWrapper.getOutput();
269  }
credalSets__ l_marginalSets_
Threads vertices.
unsigned int getThreadNumber()
Get the calling thread id.
margis__ l_marginalMin_
Threads lower marginals, one per thread.
std::size_t Size
In aGrUM, hashed values are unsigned long int.
Definition: types.h:47
margis__ l_marginalMax_
Threads upper marginals, one per thread.

◆ verbosity()

INLINE bool gum::ApproximationScheme::verbosity ( ) const
virtualinherited

Returns true if verbosity is enabled.

Returns
Returns true if verbosity is enabled.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 153 of file approximationScheme_inl.h.


153 { return verbosity_; }
bool verbosity_
If true, verbosity is enabled.

◆ vertices()

template<typename GUM_SCALAR >
const std::vector< std::vector< GUM_SCALAR > > & gum::credal::InferenceEngine< GUM_SCALAR >::vertices ( const NodeId  id) const
inherited

Get the vertices of a given node id.

Parameters
id: The node id whose vertices we want.
Returns
A constant reference to this node's vertices.

Definition at line 525 of file inferenceEngine_tpl.h.

525  {
526  return marginalSets_[id];
527  }
credalSet marginalSets_
Credal sets vertices, if enabled.

◆ verticesFusion_()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::verticesFusion_ ( )
protected
Deprecated:
Fusion of the per-thread vertices.

Definition at line 375 of file multipleInferenceEngine_tpl.h.

375  {
376  // don't create threads if there are no vertices saved
377  if (!infE__::storeVertices_) return;
378 
379 #pragma omp parallel
380  {
381  int threadId = getThreadNumber();
382  Size nsize = Size(workingSet_[threadId]->size());
383 
384 #pragma omp for
385 
386  for (long i = 0; i < long(nsize); i++) {
387  Size tsize = Size(l_marginalMin_.size());
388 
389  // go through all threads
390  for (long tId = 0; tId < long(tsize); tId++) {
391  auto& nodeThreadCredalSet = l_marginalSets_[tId][i];
392 
393  // for each vertex, if we are at any opt marginal, add it to the set
394  for (const auto& vtx: nodeThreadCredalSet) {
395  // we run redundancy elimination at each step
396  // because there could be 100000 threads and the set will be so
397  // huge
398  // ...
399  // BUT not if vertices are of dimension 2 ! opt check and equality
400  // should be enough
401  infE__::updateCredalSets_(i, vtx, (vtx.size() > 2) ? true : false);
402  } // end of : nodeThreadCredalSet
403  } // end of : all threads
404  } // end of : all variables
405  } // end of : parallel region
406  }
credalSets__ l_marginalSets_
Threads vertices.
void updateCredalSets_(const NodeId &id, const std::vector< GUM_SCALAR > &vertex, const bool &elimRedund=false)
Given a node id and one of its possible vertices, update its credal set.
unsigned int getThreadNumber()
Get the calling thread id.
bool storeVertices_
True if credal sets vertices are stored, False otherwise.
margis__ l_marginalMin_
Threads lower marginals, one per thread.
std::vector< bnet__ *> workingSet_
Threads IBayesNet.
std::size_t Size
In aGrUM, hashed values are unsigned long int.
Definition: types.h:47

Member Data Documentation

◆ burn_in_

Size gum::ApproximationScheme::burn_in_
protectedinherited

Number of iterations before checking stopping criteria.

Definition at line 413 of file approximationScheme.h.

◆ credalNet_

template<typename GUM_SCALAR >
const CredalNet< GUM_SCALAR >* gum::credal::InferenceEngine< GUM_SCALAR >::credalNet_
protectedinherited

A pointer to the Credal Net used.

Definition at line 69 of file inferenceEngine.h.

◆ current_epsilon_

double gum::ApproximationScheme::current_epsilon_
protectedinherited

Current epsilon.

Definition at line 368 of file approximationScheme.h.

◆ current_rate_

double gum::ApproximationScheme::current_rate_
protectedinherited

Current rate.

Definition at line 374 of file approximationScheme.h.

◆ current_state_

ApproximationSchemeSTATE gum::ApproximationScheme::current_state_
protectedinherited

The current state.

Definition at line 383 of file approximationScheme.h.

◆ current_step_

Size gum::ApproximationScheme::current_step_
protectedinherited

The current step.

Definition at line 377 of file approximationScheme.h.

◆ dbnOpt_

template<typename GUM_SCALAR >
VarMod2BNsMap< GUM_SCALAR > gum::credal::InferenceEngine< GUM_SCALAR >::dbnOpt_
protectedinherited

Object used to efficiently store optimal bayes net during inference, for some algorithms.

Definition at line 142 of file inferenceEngine.h.

◆ dynamicExpMax_

template<typename GUM_SCALAR >
dynExpe gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpMax_
protectedinherited

Upper dynamic expectations.

If the network is not dynamic, its content is the same as expectationMax_.

Definition at line 96 of file inferenceEngine.h.

◆ dynamicExpMin_

template<typename GUM_SCALAR >
dynExpe gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpMin_
protectedinherited

Lower dynamic expectations.

If the network is not dynamic, its content is the same as expectationMin_.

Definition at line 93 of file inferenceEngine.h.

◆ enabled_eps_

bool gum::ApproximationScheme::enabled_eps_
protectedinherited

If true, the threshold convergence is enabled.

Definition at line 392 of file approximationScheme.h.

◆ enabled_max_iter_

bool gum::ApproximationScheme::enabled_max_iter_
protectedinherited

If true, the maximum iterations stopping criterion is enabled.

Definition at line 410 of file approximationScheme.h.

◆ enabled_max_time_

bool gum::ApproximationScheme::enabled_max_time_
protectedinherited

If true, the timeout is enabled.

Definition at line 404 of file approximationScheme.h.

◆ enabled_min_rate_eps_

bool gum::ApproximationScheme::enabled_min_rate_eps_
protectedinherited

If true, the minimal threshold for epsilon rate is enabled.

Definition at line 398 of file approximationScheme.h.

◆ eps_

double gum::ApproximationScheme::eps_
protectedinherited

Threshold for convergence.

Definition at line 389 of file approximationScheme.h.

◆ evidence_

template<typename GUM_SCALAR >
margi gum::credal::InferenceEngine< GUM_SCALAR >::evidence_
protectedinherited

Holds observed variables states.

Definition at line 102 of file inferenceEngine.h.

◆ expectationMax_

template<typename GUM_SCALAR >
expe gum::credal::InferenceEngine< GUM_SCALAR >::expectationMax_
protectedinherited

Upper expectations, if some variables modalities were inserted.

Definition at line 89 of file inferenceEngine.h.

◆ expectationMin_

template<typename GUM_SCALAR >
expe gum::credal::InferenceEngine< GUM_SCALAR >::expectationMin_
protectedinherited

Lower expectations, if some variables modalities were inserted.

Definition at line 86 of file inferenceEngine.h.

◆ history_

std::vector< double > gum::ApproximationScheme::history_
protectedinherited

The scheme history, used only if verbosity == true.

Definition at line 386 of file approximationScheme.h.

◆ l_clusters_

template<typename GUM_SCALAR , class BNInferenceEngine >
clusters__ gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::l_clusters_
protected

Threads clusters.

Definition at line 107 of file multipleInferenceEngine.h.

◆ l_evidence_

template<typename GUM_SCALAR , class BNInferenceEngine >
margis__ gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::l_evidence_
protected

Threads evidence.

Definition at line 105 of file multipleInferenceEngine.h.

◆ l_expectationMax_

template<typename GUM_SCALAR , class BNInferenceEngine >
expes__ gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::l_expectationMax_
protected

Threads upper expectations, one per thread.

Definition at line 99 of file multipleInferenceEngine.h.

◆ l_expectationMin_

template<typename GUM_SCALAR , class BNInferenceEngine >
expes__ gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::l_expectationMin_
protected

Threads lower expectations, one per thread.

Definition at line 97 of file multipleInferenceEngine.h.

◆ l_inferenceEngine_

template<typename GUM_SCALAR , class BNInferenceEngine >
std::vector< BNInferenceEngine* > gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::l_inferenceEngine_
protected

Threads BNInferenceEngine.

Definition at line 115 of file multipleInferenceEngine.h.

◆ l_marginalMax_

template<typename GUM_SCALAR , class BNInferenceEngine >
margis__ gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::l_marginalMax_
protected

Threads upper marginals, one per thread.

Definition at line 95 of file multipleInferenceEngine.h.

◆ l_marginalMin_

template<typename GUM_SCALAR , class BNInferenceEngine >
margis__ gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::l_marginalMin_
protected

Threads lower marginals, one per thread.

Definition at line 93 of file multipleInferenceEngine.h.

◆ l_marginalSets_

template<typename GUM_SCALAR , class BNInferenceEngine >
credalSets__ gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::l_marginalSets_
protected

Threads vertices.

Definition at line 103 of file multipleInferenceEngine.h.

◆ l_modal_

template<typename GUM_SCALAR , class BNInferenceEngine >
modals__ gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::l_modal_
protected

Threads modalities.

Definition at line 101 of file multipleInferenceEngine.h.

◆ l_optimalNet_

template<typename GUM_SCALAR , class BNInferenceEngine >
std::vector< VarMod2BNsMap< GUM_SCALAR >* > gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::l_optimalNet_
protected

Threads optimal IBayesNet.

Definition at line 117 of file multipleInferenceEngine.h.

◆ last_epsilon_

double gum::ApproximationScheme::last_epsilon_
protectedinherited

Last epsilon value.

Definition at line 371 of file approximationScheme.h.

◆ marginalMax_

template<typename GUM_SCALAR >
margi gum::credal::InferenceEngine< GUM_SCALAR >::marginalMax_
protectedinherited

Upper marginals.

Definition at line 79 of file inferenceEngine.h.

◆ marginalMin_

template<typename GUM_SCALAR >
margi gum::credal::InferenceEngine< GUM_SCALAR >::marginalMin_
protectedinherited

Lower marginals.

Definition at line 77 of file inferenceEngine.h.

◆ marginalSets_

template<typename GUM_SCALAR >
credalSet gum::credal::InferenceEngine< GUM_SCALAR >::marginalSets_
protectedinherited

Credal sets vertices, if enabled.

Definition at line 82 of file inferenceEngine.h.

◆ max_iter_

Size gum::ApproximationScheme::max_iter_
protectedinherited

The maximum iterations.

Definition at line 407 of file approximationScheme.h.

◆ max_time_

double gum::ApproximationScheme::max_time_
protectedinherited

The timeout.

Definition at line 401 of file approximationScheme.h.

◆ min_rate_eps_

double gum::ApproximationScheme::min_rate_eps_
protectedinherited

Threshold for the epsilon rate.

Definition at line 395 of file approximationScheme.h.

◆ modal_

template<typename GUM_SCALAR >
dynExpe gum::credal::InferenceEngine< GUM_SCALAR >::modal_
protectedinherited

Variables modalities used to compute expectations.

Definition at line 99 of file inferenceEngine.h.

◆ oldMarginalMax_

template<typename GUM_SCALAR >
margi gum::credal::InferenceEngine< GUM_SCALAR >::oldMarginalMax_
protectedinherited

Old upper marginals used to compute epsilon.

Definition at line 74 of file inferenceEngine.h.

◆ oldMarginalMin_

template<typename GUM_SCALAR >
margi gum::credal::InferenceEngine< GUM_SCALAR >::oldMarginalMin_
protectedinherited

Old lower marginals used to compute epsilon.

Definition at line 72 of file inferenceEngine.h.

◆ onProgress

Signaler3< Size, double, double > gum::IApproximationSchemeConfiguration::onProgress
inherited

Progression, error and time.

Definition at line 58 of file IApproximationSchemeConfiguration.h.

◆ onStop

Signaler1< std::string > gum::IApproximationSchemeConfiguration::onStop
inherited

Criteria messageApproximationScheme.

Definition at line 61 of file IApproximationSchemeConfiguration.h.

◆ period_size_

Size gum::ApproximationScheme::period_size_
protectedinherited

Checking criteria frequency.

Definition at line 416 of file approximationScheme.h.

◆ query_

template<typename GUM_SCALAR >
query gum::credal::InferenceEngine< GUM_SCALAR >::query_
protectedinherited

Holds the query nodes states.

Definition at line 104 of file inferenceEngine.h.

◆ repetitiveInd_

template<typename GUM_SCALAR >
bool gum::credal::InferenceEngine< GUM_SCALAR >::repetitiveInd_
protectedinherited

True if using repetitive independence ( dynamic network only ), False otherwise.

False by default.

Definition at line 128 of file inferenceEngine.h.

◆ storeBNOpt_

template<typename GUM_SCALAR >
bool gum::credal::InferenceEngine< GUM_SCALAR >::storeBNOpt_
protectedinherited

True if optimal IBayesNets are stored, for each variable and each modality, False otherwise.

Not all algorithms offer this option. False by default.

Definition at line 138 of file inferenceEngine.h.

◆ storeVertices_

template<typename GUM_SCALAR >
bool gum::credal::InferenceEngine< GUM_SCALAR >::storeVertices_
protectedinherited

True if credal sets vertices are stored, False otherwise.

False by default.

Definition at line 124 of file inferenceEngine.h.

◆ t0_

template<typename GUM_SCALAR >
cluster gum::credal::InferenceEngine< GUM_SCALAR >::t0_
protectedinherited

Clusters of nodes used with dynamic networks.

Any node key in t0_ is present at \( t=0 \) and any node belonging to the node set of this key shares the same CPT as the key. Used for sampling with repetitive independence.

Definition at line 112 of file inferenceEngine.h.

◆ t1_

template<typename GUM_SCALAR >
cluster gum::credal::InferenceEngine< GUM_SCALAR >::t1_
protectedinherited

Clusters of nodes used with dynamic networks.

Any node key in t1_ is present at \( t=1 \) and any node belonging to the node set of this key shares the same CPT as the key. Used for sampling with repetitive independence.

Definition at line 119 of file inferenceEngine.h.

◆ timer_

Timer gum::ApproximationScheme::timer_
protectedinherited

The timer.

Definition at line 380 of file approximationScheme.h.

◆ timeSteps_

template<typename GUM_SCALAR >
int gum::credal::InferenceEngine< GUM_SCALAR >::timeSteps_
protectedinherited

The number of time steps of this network (only useful for dynamic networks).

Deprecated:

Definition at line 149 of file inferenceEngine.h.

◆ verbosity_

bool gum::ApproximationScheme::verbosity_
protectedinherited

If true, verbosity is enabled.

Definition at line 419 of file approximationScheme.h.

◆ workingSet_

template<typename GUM_SCALAR , class BNInferenceEngine >
std::vector< bnet__* > gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::workingSet_
protected

Threads IBayesNet.

Definition at line 110 of file multipleInferenceEngine.h.

◆ workingSetE_

template<typename GUM_SCALAR , class BNInferenceEngine >
std::vector< List< const Potential< GUM_SCALAR >* >* > gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::workingSetE_
protected

Threads evidence.

Definition at line 112 of file multipleInferenceEngine.h.


The documentation for this class was generated from the following files: