and the CNN approach an approach of O(n² − n); the convergence of run times is contrary to this, suggesting the implementation found within the pomegranate package may not be an entirely faithful reproduction of the algorithm originally described by Chow & Liu (1968).

The standard deviation outcomes are predictably lower for the CNN model (with values hovering at 0.04s), given the linear relationship between the number of edges and the number of (approximately) constant-time predictions made by the CNN. While this certainty is of little consequence given the Chow-Liu algorithm's performance (with standard deviation hovering at c. 0.11s), it may bear relevance in scenarios where run-time certainty is of greater import.

Further improvements could be realised by employing GPU acceleration; however, given the pomegranate package's limited GPU support, this was not deemed an entirely fair comparison and thus was not pursued.

5.3 Mechanism Discussion

When applied to image recognition, a typical CNN searches for the presence of edges within the image via convolutions and pooling, progressively building a more sophisticated representation of the patterns that exist within the data. Thus, in order to perform the edge prediction task, a similar mechanism is likely to be in play. Indeed, a plausible mechanism of action for the CNN approach is the detection of correlating patterns within the dataset's dimensions, where the intensity of a row in a CNN filter's grid may indicate whether an edge is present in the BN.

The CNN approach also raises an interesting dichotomy: is the neural network circumventing the problem by learning underlying synthetic data creation artefacts and, if it is, does this in fact matter? Furthermore, if this circumvention is taking place, should it be considered ‘cheating’? An optimistic view of this circumstance is that it can be viewed as a potential asset.
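The correlation-detection mechanism posited in Section 5.3 can be caricatured in a few lines. The sketch below is purely illustrative and makes several assumptions not drawn from this work: a hypothetical 5-variable dataset, an absolute correlation matrix as the "image" presented to the network, and a hand-crafted threshold standing in for the filters a trained CNN would actually learn.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 5-variable system (illustrative, not this work's data):
# X1 is driven by X0, and X3 by X2; the remaining variables are noise.
n_samples = 2000
data = rng.normal(size=(n_samples, 5))
data[:, 1] += 0.9 * data[:, 0]
data[:, 3] += 0.9 * data[:, 2]

# Present the dataset as an "image": the absolute correlation matrix,
# one pixel per pair of dimensions.
corr = np.abs(np.corrcoef(data, rowvar=False))

# A trained CNN would learn filters over such a representation; here a
# hand-crafted stand-in zeroes the diagonal and thresholds the remaining
# intensities to score candidate edges.
edge_scores = corr - np.eye(5)
predicted = sorted({tuple(int(k) for k in sorted(ij))
                    for ij in zip(*np.where(edge_scores > 0.5))})
print(predicted)  # → [(0, 1), (2, 3)]
```

Under this caricature, each bright off-diagonal pixel corresponds to a correlating pattern between two dimensions, consistent with the intuition that filter intensities may indicate the presence of an edge in the BN.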
Provided the end use case targets networks that arise from a single system (for example, disease models in the human body), this ‘cheating’ may, in fact, be an asset. That is, the CNN would be modelling the behaviour of the underlying source system, rather than the surrogate outcomes presented to it within the variables.

5.4 Relation to Generalised BN Structure Learning

The cold-start nature of the existing structure learning algorithms is a double-edged sword: it ensures a degree of generalisation regardless of the dataset, but such an algorithm can never learn from the dataset without its assumptions or heuristics being adjusted manually. In contrast, the CNN approach avoids the cold-start problem, but will only do so effectively when the underlying CNN has been trained on a representative sample of data. Thus a quandary is presented: an inevitable by-product of the super-exponential search space issue is that curating appropriate training data, capable of generalisation to non-synthetic and unseen network types, may require a super-exponential supply of labelled training data.

Furthermore, the validity of the comparison with the Chow-Liu algorithm can be debated, given that the synthetic data creation approach produces a BN structure that may be a tree but is not mandated to be one. As such, one can argue that the comparison of the CNN approach with the Chow-Liu algorithm on the pre-specified synthetic data is not a truly fair comparison. However, as all BN structure learning algorithms employ heuristics and assumptions that may not hold true, the partial violation of the Chow-Liu tree assumption is not deemed to invalidate the results.

In conclusion, while the CNN approach demonstrates promise, the success of the results should be considered qualified at best.
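The tree assumption under debate can be made concrete with a minimal sketch of the standard Chow-Liu procedure (pairwise mutual information followed by a maximum-weight spanning tree). This is a from-scratch illustration, not the pomegranate implementation, and the function names (`mutual_information`, `chow_liu_tree`) and the chain-structured example data are assumptions introduced here; its key property for the argument above is that its output is always a tree, whatever the true structure of the generating system.

```python
import numpy as np
from itertools import combinations

def mutual_information(x, y):
    """Empirical mutual information (in nats) between two discrete series."""
    n = len(x)
    joint = {}
    for a, b in zip(x, y):
        joint[(a, b)] = joint.get((a, b), 0) + 1
    px = {a: np.mean(x == a) for a in set(x)}
    py = {b: np.mean(y == b) for b in set(y)}
    return sum((c / n) * np.log((c / n) / (px[a] * py[b]))
               for (a, b), c in joint.items())

def chow_liu_tree(data):
    """Maximum-weight spanning tree over pairwise MI (Kruskal's algorithm)."""
    n_vars = data.shape[1]
    edges = sorted(((mutual_information(data[:, i], data[:, j]), i, j)
                    for i, j in combinations(range(n_vars), 2)), reverse=True)
    parent = list(range(n_vars))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:          # accept the edge only if it creates no cycle
            parent[ri] = rj
            tree.append((i, j))
    return sorted(tree)

# Chain-structured data X0 -> X1 -> X2 (a tree, so the assumption holds here).
rng = np.random.default_rng(1)
x0 = rng.integers(0, 2, 5000)
x1 = x0 ^ (rng.random(5000) < 0.1).astype(int)  # X1: noisy copy of X0
x2 = x1 ^ (rng.random(5000) < 0.1).astype(int)  # X2: noisy copy of X1
print(chow_liu_tree(np.column_stack([x0, x1, x2])))  # → [(0, 1), (1, 2)]
```

When the synthetic generator emits a non-tree structure, this procedure will still return a spanning tree, which is precisely why the comparison is only partially fair rather than invalid.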