PeerJ Computer Science Preprints: Theory and Formal Methods
https://peerj.com/preprints/index.atom?journal=cs&subject=11800
Theory and Formal Methods articles published in PeerJ Computer Science Preprints

Linear time-varying Luenberger observer applied to diabetes
https://peerj.com/preprints/3341 (published 2017-10-12)
Onofre Orozco López, Carlos Eduardo Castañeda Hernández, Agustín Rodríguez Herrero, Gema García Saéz, María Elena Hernando
We present a linear time-varying Luenberger observer (LTVLO) that uses compartmental models to estimate the unmeasurable states in patients with type 1 diabetes. The proposed LTVLO is based on linearization of the virtual patient (VP) about an operating point, which yields a linear time-varying system. The LTVLO gains are obtained by placing the asymptotic eigenvalues, provided the observability matrix has full rank. The unmeasurable variables are estimated using Ackermann's methodology. Additionally, a Lyapunov approach is presented to prove the stability of the time-varying proposal. To evaluate the proposed methodology, we designed three experiments: A) a VP obtained with the Bergman minimal model; B) a VP obtained with the compartmental model presented by Hovorka in 2004; and C) a real-patient data set. For experiments A) and B), a meal plan is applied to the VP, and the dynamic response of each model state is compared to the response of the corresponding variable of the time-varying observer. Once the observer is evaluated in experiment B), the proposal is applied in experiment C) to data extracted from real patients, and the unmeasurable state-space variables are obtained with the LTVLO. The LTVLO methodology has the feature of being updated at each instant of time to estimate the states under a known structure. The results are obtained by simulation with Matlab and Simulink. The LTVLO estimates the unmeasurable states of in silico patients with high accuracy by updating the Luenberger gains at each iteration. The accuracy of the estimated state-space variables is validated through a fit parameter.
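The gain-selection step described above (eigenvalue placement via Ackermann's formula) can be sketched for a toy time-invariant system. The plant matrices, measurement model, and desired poles below are illustrative assumptions, not the paper's glucose-insulin model, whose gains would be recomputed at each instant of time.

```python
# Minimal sketch of Luenberger observer gain selection via Ackermann's
# formula for a 2-state plant. Matrices and poles are illustrative,
# NOT the paper's glucose-insulin model.

def ackermann_gain_2x2(A, C, p1, p2):
    """Gain L placing the eigenvalues of A - L*C at p1 and p2, via
    Ackermann's formula: L = q(A) * inv(O) * [0, 1]^T, where
    O = [C; C*A] is the observability matrix and
    q(z) = (z - p1)(z - p2) the desired characteristic polynomial."""
    a, b, c, d = A[0][0], A[0][1], A[1][0], A[1][1]
    s, p = p1 + p2, p1 * p2
    # q(A) = A^2 - s*A + p*I
    qA = [[a*a + b*c - s*a + p, a*b + b*d - s*b],
          [c*a + d*c - s*c,     c*b + d*d - s*d + p]]
    # Observability matrix O = [C; C*A]; its determinant must be nonzero.
    O = [C, [C[0]*a + C[1]*c, C[0]*b + C[1]*d]]
    det = O[0][0]*O[1][1] - O[0][1]*O[1][0]
    assert abs(det) > 1e-12, "plant is not observable"
    v = [-O[0][1]/det, O[0][0]/det]          # v = inv(O) @ [0, 1]^T
    return [qA[0][0]*v[0] + qA[0][1]*v[1],
            qA[1][0]*v[0] + qA[1][1]*v[1]]

# Illustrative plant: discretized double integrator, position measured.
A = [[1.0, 0.1], [0.0, 1.0]]
C = [1.0, 0.0]
L = ackermann_gain_2x2(A, C, 0.5, 0.5)       # both observer poles at 0.5

# Run plant and observer side by side; the estimation error obeys
# e_{k+1} = (A - L*C) e_k and contracts since both poles lie inside
# the unit circle.
x, xh = [1.0, -1.0], [0.0, 0.0]
for _ in range(60):
    y = C[0]*x[0] + C[1]*x[1]                # measurement of the true state
    innov = y - (C[0]*xh[0] + C[1]*xh[1])    # innovation
    xh = [A[0][0]*xh[0] + A[0][1]*xh[1] + L[0]*innov,
          A[1][0]*xh[0] + A[1][1]*xh[1] + L[1]*innov]
    x = [A[0][0]*x[0] + A[0][1]*x[1],
         A[1][0]*x[0] + A[1][1]*x[1]]
```

For this plant and pole choice the formula yields L = [1.0, 2.5], and after 60 steps the estimate xh has converged to the true state x.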
Weighted growth functions of automatic groups
https://peerj.com/preprints/3256 (published 2017-09-15)
Mikael Vejdemo-Johansson
The growth function is the generating function for the sizes of spheres around the identity in Cayley graphs of groups. We present a novel method to calculate growth functions for automatic groups whose normal-form-recognizing automata recognize a single normal form for each group element and are at most context-free in complexity: context-free grammars can be translated into algebraic systems of equations, whose solutions represent the generating functions of their corresponding non-terminal symbols.
This approach allows us to seamlessly introduce weightings on the growth function: we can assign different, or even pairwise distinct, weights to each of the generators in an underlying presentation, such that the weighting is reflected in the growth function. We recover known growth functions for small braid groups, and calculate growth functions that weight each generator in an automatic presentation of the braid groups according to its length in braid generators.
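As a concrete, if much more naive, baseline for the growth functions discussed above: for the free group F_2 the freely reduced words are the unique normal forms, so sphere sizes can be computed by brute-force enumeration and compared with the classical closed form 4·3^(n-1) for n ≥ 1. This sketch is for orientation only; it is not the paper's automaton-to-algebraic-system method.

```python
# Sphere sizes around the identity in the Cayley graph of the free
# group F_2 = <a, b>, by BFS over freely reduced words.
GENS = ['a', 'A', 'b', 'B']                # A = a^{-1}, B = b^{-1}
INV = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}

def sphere_sizes(radius):
    """Counts of group elements at each distance 0..radius from the
    identity; freely reduced words are unique normal forms, so counting
    words counts elements."""
    sizes = [1]                            # radius 0: just the identity
    frontier = ['']
    for _ in range(radius):
        nxt = []
        for w in frontier:
            for g in GENS:
                if not w or w[-1] != INV[g]:   # keep the word reduced
                    nxt.append(w + g)
        sizes.append(len(nxt))
        frontier = nxt
    return sizes

print(sphere_sizes(4))   # -> [1, 4, 12, 36, 108]
```

The growth series is therefore 1 + 4z + 12z^2 + 36z^3 + ..., matching the known rational generating function of F_2.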
An artificial immune system approach to automated program verification: Towards a theory of undecidability in biological computing
https://peerj.com/preprints/2690 (published 2017-08-17)
Soumya Banerjee
An immune-system-inspired Artificial Immune System (AIS) algorithm is presented and used for automated program verification. Relevant immunological concepts are discussed and the field of AIS is briefly reviewed. It is proposed to use this AIS algorithm for a specific automated program verification task: predicting the shape of program invariants. It is shown that the algorithm correctly predicts invariant shape for a variety of benchmark programs. Program invariants encapsulate the computability of a particular program, e.g. whether it performs a particular function correctly and whether it terminates. This work also lays the foundation for applying concepts of theoretical incomputability and undecidability to biological systems, like the immune system, that perform robust computation to eliminate pathogens.
Elementary cellular automata as conditional Boolean formulæ
https://peerj.com/preprints/2553 (published 2016-10-24)
Trace Fleeman y Garcia
I show that any elementary cellular automaton -- the class of 1-dimensional, 2-state cellular automata originally formulated by Stephen Wolfram -- can be deconstructed into a set of two Boolean operators. I also present a conjecture concerning the relationship between the set of computationally complete elementary cellular automaton rules (such as rule 110, shown in Section 2 to be composed of a NAND gate) and the set of rules that contain universal Boolean operators (such as rule 52, shown in Section 3.1 to contain a universal Boolean operator, yet not shown, as of 2016, to be computationally complete).
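The local rule of an elementary cellular automaton is a Boolean function of the neighborhood (p, q, r), so any candidate Boolean decomposition can be checked mechanically against the rule's truth table. The decomposition of rule 110 below is one possible formula of my own choosing, not necessarily the one derived in the paper.

```python
# An elementary CA rule number encodes an 8-row truth table on the
# neighborhood (p, q, r) in Wolfram numbering: bit (4p + 2q + r) of n.

def eca_rule(n):
    """Local update function of elementary CA rule n."""
    return lambda p, q, r: (n >> (p*4 + q*2 + r)) & 1

rule110 = eca_rule(110)

def rule110_formula(p, q, r):
    # One possible decomposition: if the left neighbor is on, behave
    # like XOR of the other two cells; otherwise like their OR.
    return (p & (q ^ r)) | ((1 - p) & (q | r))

# The formula agrees with rule 110 on all 8 neighborhoods.
assert all(rule110(p, q, r) == rule110_formula(p, q, r)
           for p in (0, 1) for q in (0, 1) for r in (0, 1))
```

The same `eca_rule` helper serves any of the 256 rules, so other proposed decompositions can be verified the same way.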
Matlab code for the Discrete Hankel Transform
https://peerj.com/preprints/2216 (published 2016-07-02)
Ugo Chouinard, Natalie Baddour
Previous definitions of a Discrete Hankel Transform (DHT) have focused on methods to approximate the continuous Hankel integral transform without regard for the properties of the DHT itself. Recently, the theory of a Discrete Hankel Transform was proposed that follows the same path as the Discrete Fourier/Continuous Fourier transform. This DHT possesses orthogonality properties which lead to invertibility and also possesses the standard set of discrete shift, modulation, multiplication and convolution rules. The proposed DHT can be used to approximate the continuous forward and inverse Hankel transform. This paper describes the Matlab code developed for the numerical calculation of this DHT.
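The continuous transform that the DHT approximates can itself be sketched by direct quadrature using only the standard library; a convenient sanity check is that the Gaussian exp(-r²/2) is its own order-0 Hankel transform. The quadrature settings below are assumptions tuned for modest accuracy, and this is not the paper's Matlab code.

```python
import math

def bessel_j0(x):
    """J0 via its integral representation, midpoint rule:
    J0(x) = (1/pi) * Integral_0^pi cos(x sin t) dt."""
    n = 400
    h = math.pi / n
    return sum(math.cos(x * math.sin((k + 0.5) * h))
               for k in range(n)) * h / math.pi

def hankel0(f, k, r_max=10.0, dr=0.02):
    """Order-0 continuous Hankel transform
    F(k) = Integral_0^inf f(r) J0(k r) r dr,
    truncated at r_max and evaluated by the midpoint rule."""
    n = int(r_max / dr)
    return sum(f((i + 0.5) * dr) * bessel_j0(k * (i + 0.5) * dr)
               * (i + 0.5) * dr
               for i in range(n)) * dr

# The Gaussian exp(-r^2/2) is self-reciprocal under this transform.
gauss = lambda r: math.exp(-r * r / 2)
print(hankel0(gauss, 1.0))   # close to exp(-1/2), about 0.6065
```

A DHT implementation such as the one the paper describes can be validated against this kind of slow reference evaluation on known transform pairs.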
Properties of distance spaces with power triangle inequalities
https://peerj.com/preprints/2055 (published 2016-05-20)
Daniel J Greenhoe
Metric spaces provide a framework for analysis and have several very useful properties. Many of these properties follow in part from the triangle inequality. However, there are several applications in which the triangle inequality does not hold but in which we may still like to perform analysis. This paper investigates what happens if the triangle inequality is removed altogether, leaving what is called a distance space, and also what happens if the triangle inequality is replaced with a much more general two-parameter relation, which is herein called the "power triangle inequality". The power triangle inequality represents an uncountably large class of inequalities, and includes the triangle inequality, relaxed triangle inequality, and inframetric inequality as special cases. The power triangle inequality is defined in terms of a function that is herein called the power triangle function. The power triangle function is itself a power mean, and as such is continuous and monotone with respect to its exponential parameter; it also includes the operations of maximum, minimum, mean square, arithmetic mean, geometric mean, and harmonic mean as special cases.
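The two-argument power mean underlying the power triangle function, together with the limiting cases the abstract lists, can be sketched directly. The parameterization below and the handling of the limits p → 0 and p → ±∞ follow the standard power-mean definition and may differ in detail from the paper's exact formulation.

```python
import math

def power_mean(x, y, p):
    """M_p(x, y) = ((x^p + y^p)/2)^(1/p) for p != 0, with the usual
    limiting cases: p -> +inf gives max, p -> -inf gives min, and
    p -> 0 gives the geometric mean."""
    if p == math.inf:
        return max(x, y)
    if p == -math.inf:
        return min(x, y)
    if p == 0:
        return math.sqrt(x * y)            # limiting case
    return ((x**p + y**p) / 2) ** (1 / p)

# Special cases named in the abstract, on the pair (1, 9):
assert power_mean(1.0, 9.0, 1) == 5.0                  # arithmetic mean
assert power_mean(1.0, 9.0, 0) == 3.0                  # geometric mean
assert abs(power_mean(1.0, 9.0, -1) - 1.8) < 1e-12     # harmonic mean
assert power_mean(1.0, 9.0, math.inf) == 9.0           # maximum
assert power_mean(1.0, 9.0, -math.inf) == 1.0          # minimum
```

In this notation a power triangle inequality roughly takes the shape d(x,y) ≤ 2σ·M_p(d(x,z), d(z,y)) for parameters (p, σ), which recovers the ordinary triangle inequality at p = 1, σ = 1; the precise statement is the paper's.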
Geometric model of image formation in Scheimpflug cameras
https://peerj.com/preprints/1887 (published 2016-03-22)
Indranil Sinharoy, Prasanna Rangarajan, Marc P. Christensen
We present a geometric model of image formation in Scheimpflug cameras in its most general form. Scheimpflug imaging is commonly used in scientific and medical imaging, either to increase the depth of field of the imager or to focus on tilted object surfaces. Existing Scheimpflug imaging models do not take into account the effect of pupil magnification (i.e. the ratio of the exit pupil diameter to the entrance pupil diameter), which we have found to affect the type of distortions experienced by the image field upon lens rotations. In this work, we have also derived the relationship between the object, lens, and sensor planes in the Scheimpflug configuration, which is very similar in form to the standard Gaussian imaging equation but applicable to imaging systems in which the lens plane and the sensor plane are arbitrarily oriented with respect to each other. Since the conventional rigid camera, in which the sensor and lens planes are constrained to be parallel to each other, is a special case of the Scheimpflug camera, our model also applies to imaging with conventional cameras.
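The classical Scheimpflug geometry for an ideal thin lens can be verified numerically: the object plane, the lens plane, and the image plane must intersect in a single line. The sketch below images two points of a tilted object plane through a thin lens in 2D; pupil magnification is ignored, so this is the textbook special case rather than the paper's general model, and all numbers are illustrative.

```python
# Numerical check of the Scheimpflug principle for an ideal thin lens
# at z = 0 with focal length f; objects sit at z < 0.

def image_point(x, z, f):
    """Thin-lens image of a point at lateral offset x and axial
    position z < 0: 1/s_i = 1/f - 1/s_o with s_o = -z, and lateral
    magnification m = -s_i / s_o."""
    s_o = -z
    s_i = 1.0 / (1.0/f - 1.0/s_o)
    return (-s_i / s_o) * x, s_i

f, d, slope = 1.0, 3.0, 0.5        # object plane: z = -d + slope * x
# Image two points of the tilted object plane; the image plane is the
# line through their images.
x1, z1 = image_point(0.0, -d, f)
x2, z2 = image_point(1.0, -d + slope, f)
# Where the image plane crosses the lens plane z = 0:
x_cross = x1 - z1 * (x2 - x1) / (z2 - z1)
# The object plane crosses the lens plane at x = d / slope; the
# Scheimpflug principle says the image plane crosses there too.
assert abs(x_cross - d / slope) < 1e-9
```

With these numbers both planes meet the lens plane at x = 6, confirming the common intersection line (the "Scheimpflug line") for the paraxial thin-lens case.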
Canonical instabilities of autonomous vehicle systems
https://peerj.com/preprints/1714 (published 2016-02-06)
Rodrick Wallace
Formal argument suggests that command, communication and control systems can remain stable in the sense of the Data Rate Theorem, which mandates the minimum rate of control information required to stabilize inherently unstable 'plants', but may nonetheless, under fog-of-war demands, collapse into dysfunctional modes at variance with their fundamental mission. We apply the theory to autonomous ground vehicles under intelligent traffic control, in which swarms of interacting, self-driving devices are inherently unstable as a consequence of the basic irregularity of the road network. It appears that such 'V2V/V2I' systems will experience large-scale failures analogous to the vast propagating fronts of power-network blackouts, and possibly less benign but more subtle patterns of 'psychopathology' at various scales.
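The Data Rate Theorem invoked above states, in its standard form, that a linear plant x_{k+1} = A x_k can be stabilized over a feedback channel only if the channel rate exceeds the sum of log2|λ| over the unstable eigenvalues of A. A sketch for a 2×2 plant with real eigenvalues follows; the plant is illustrative, not a traffic model from the paper.

```python
import math

def min_data_rate_2x2(A):
    """Data Rate Theorem lower bound (bits per step) for a 2x2 plant:
    sum of log2|lambda| over eigenvalues with |lambda| > 1, assuming
    the eigenvalues are real."""
    a, b, c, d = A[0][0], A[0][1], A[1][0], A[1][1]
    tr, det = a + d, a*d - b*c
    disc = math.sqrt(tr*tr - 4*det)    # real-eigenvalue assumption
    eigs = [(tr + disc) / 2, (tr - disc) / 2]
    return sum(math.log2(abs(l)) for l in eigs if abs(l) > 1)

# One unstable mode at 2 and one stable mode at 0.5: at least
# log2(2) = 1 bit of control information per step is required.
print(min_data_rate_2x2([[2.0, 0.0], [0.0, 0.5]]))   # -> 1.0
```

A fully stable plant yields a bound of zero: no control information is needed to keep it stable.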
Similarity to a single set
https://peerj.com/preprints/1713 (published 2016-02-05)
Lee Naish
Identifying patterns and associations in data is fundamental to discovery in science. This work investigates a very simple instance of the problem, where each data point consists of a vector of binary attributes and attributes are treated equally. For example, each data point may correspond to a person, and the attributes may be their sex, whether they smoke cigarettes, whether they have been diagnosed with lung cancer, etc. Measuring similarity of attributes in the data is equivalent to measuring similarity of sets: an attribute can be mapped to the set of data points which have the attribute. Furthermore, there is one identified base set (or attribute) and only similarity to that set is considered; the other sets are just ranked according to how similar they are to the base set. For example, if the base set is lung cancer sufferers, the set of smokers may well be high in the ranking. Identifying set similarity or correlation has many uses and is often the first step in determining causality. Set similarity is also the basis for comparing binary classifiers, such as diagnostic tests, for any data set. More than a hundred set similarity measures have been proposed in the literature, but there is very little understanding of how best to choose a similarity measure for a given domain. This work discusses numerous properties that similarity measures can have, weakening some previously proposed definitions so they are no longer incompatible, and identifying important forms of symmetry which have not previously been considered. It defines ordering relations over similarity measures and shows how some properties of a domain can be used to help choose a similarity measure which will perform well for that domain.
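As one concrete instance of the setup above, each attribute can be mapped to the set of data points that have it, and the attribute sets ranked by similarity to the base set. Jaccard similarity is used here as one of the hundred-plus measures the text mentions; the data is invented for illustration.

```python
# Rank attribute sets by Jaccard similarity to a base set.

def jaccard(s, t):
    """|s intersect t| / |s union t|, taken as 1.0 when both are empty."""
    return len(s & t) / len(s | t) if s | t else 1.0

# Data points 0..9; each attribute is the set of points that have it.
base = {0, 1, 2, 3}                     # e.g. lung-cancer sufferers
attributes = {
    'smoker':  {0, 1, 2, 4},
    'male':    {0, 2, 4, 6, 8},
    'athlete': {5, 6, 7, 8, 9},
}
ranking = sorted(attributes,
                 key=lambda a: jaccard(base, attributes[a]),
                 reverse=True)
print(ranking)   # -> ['smoker', 'male', 'athlete']
```

Swapping in a different similarity measure only changes the `key` function, which is why properties of the measure (symmetry, orderings) matter for which ranking a domain ends up with.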
Testing a new idea to solve the P = NP problem with mathematical induction
https://peerj.com/preprints/1455 (published 2015-10-27)
Yubin Huang
Background. P and NP are two classes (sets) of languages in computer science. An open problem is whether P = NP. This paper tests a new idea to compare the two language sets and attempts to prove that they consist of the same languages, using elementary mathematical methods and basic knowledge of Turing machines.

Methods. We introduce a filter function C(M,w), the number of configurations that have more than one child (nondeterministic moves) in the shortest accepting computation path of a nondeterministic Turing machine M on input w. For any language L(M) ∈ NP, we can then define a series of its subsets, Li(M) = {w | w ∈ L(M) ∧ C(M,w) ≤ i}, and a series of subsets of NP, Li = {Li(M) | ∀M ∙ L(M) ∈ NP}. A nondeterministic multi-tape Turing machine is used to bridge the two language sets Li and Li+1, by simulating the (i+1)-th nondeterministic move deterministically on multiple work tapes, thereby removing one (the last) nondeterministic move.

Results. The main result is that, with the above methods, the language set Li+1, which seems more powerful, can be proved to be a subset of Li. This result collapses the hierarchy, giving Li ⊆ P for all i ∈ N. With NP = ⋃i∈N Li, it follows that NP ⊆ P. Because P ⊆ NP by definition, we have P = NP.

Discussion. There may be other ways to define the subsets Li and prove the same result. The result can be extended to cover any set of time functions C: if ∀f ∙ f ∈ C ⇒ f² ∈ C, then DTIME(C) = NTIME(C). This paper does not show any way to find a polynomial-time solution for a problem known to be in NP.