We found 121 documents containing the name of Carrick (as described in the Methods) somewhere in the list of authors, obtained from PubMed (N = 42), Scopus (N = 46), and ResearchGate (N = 110). In addition, five articles were found from other sources. Many documents were identified in all three sources, but ResearchGate contained more titles than the others. For detailed information, please see the flow chart (Fig. 1) and Additional file 1.
Topics of publications
These documents were classified into the following topics based on titles and abstracts: ‘Brain’ (N = 70), ‘Posture and/or Balance’ (N = 24), ‘Other Functional Neurology’ (N = 6), and ‘Other Non-Functional Neurology’ (N = 21). Some titles (identical or near-identical) appeared several times (N = 5), e.g. both as an article and as an abstract for some sort of ‘event’. For a list of documents by topic, see Additional file 1.
Proportion of articles that are research studies, case studies, abstracts and conference papers
The number and types of research documents are shown in Additional file 2a (full text, n = 53) and 2b (non-full text, n = 68), separating those that we could obtain as full text articles from those that we identified only as abstract texts (Fig. 2). Among the full text articles, the two most common topics were ‘Brain’ (N = 20) and ‘Other Non-Functional Neurology’ (N = 19), most of which we classified as clinical studies. Among the non-full texts, the topic ‘Brain’ was the most common (N = 50), reported mainly as single case reports or as case series without an experimental design.
Studies purporting to study effect/benefit of treatment/intervention
Fourteen full text research articles appeared to deal with the effect or benefit of a treatment or intervention using elements of Functional Neurology. They were selected because they used words such as “effect”, “changes”, or “beneficial impact” (Table 1).
Studies on effect/benefit with a control group
Thus, fourteen articles appeared to have studied the effect/benefit of a treatment/intervention. However, six of these [22,23,24,25,26,27] were removed because they had no control groups and could therefore only report on ‘outcome’, which may or may not have been caused by the treatment/intervention. Another article [28] was later removed because, on further scrutiny, it was found not to be appropriate for this review. In fact, it was a pilot study investigating whether a method to measure posture was robust enough to use in different data collection settings and to allow pooling of data.
Methodological assessment of seven studies on effect/benefit with a control group
Description
The remaining seven articles are described in Table 2. In brief, they reported on the treatment of several conditions, namely: abnormal brain function, autism spectrum disorder, and stroke. They also studied balance and posture.
The treatment/intervention consisted of various sounds [29], manipulation (cervical or extremity) [7, 21], or ocular movements [30]. For example, a computer-based auditory software program, the Mente Autism Device, was used to treat autism spectrum disorder [31], and eye movements were used to treat acute middle cerebral artery ischemic stroke [30]. Posture and balance were influenced by body rotation [20] and also by different types of music [32]. Brain function was evaluated using the “blind spot” and stimulated using cervical manipulation [7].
As can be seen in Table 2, col. 7, ethics approval was mainly given by the “own institutional review board” [33], which we interpreted as the Carrick Institute. Two of the trials had been registered in a trials register (ClinicalTrials.gov) (Table 2, col. 7, 2nd and 7th rows). Conflict of interest was reported in four studies (Table 2, col. 8). The Carrick Institute was reported to have funded three studies (Table 2, col. 8, rows 2, 3 and 7).
Quality assessment
These remaining seven full text articles (Table 3) lacked important aspects of scientific rigor, with scores ranging from 1.5/7 (21%) to 4/7 (57%). Five of the seven articles scored between 21 and 43%, and the remaining two scored 50 and 57%, respectively (Table 3). Typically, subjects were not blind or naïve to treatment, some outcome measurements were not stated to be reliable or reproducible, and the assessor and statistician were usually not reported to be blind.
For example, the study scoring 21% (1.5/7) reported that there was random allocation but did not state whether it was concealed. The only other quality item fulfilled of the required seven was ‘interventions well described’. The study that scored 57% (4/7) failed to report whether study subjects were naïve to treatment, whether allocation into treatment groups was random, and whether the person analysing the data was blind to treatment group. A brief description of each article is provided below, with articles sorted in descending order of methodological quality score.
“Changes in Brain Function after Manipulation of the Cervical Spine” [7]
This article from 1997 [7] (score 4/7; 57%) is the first scientific report on FN published by FRC and may therefore be considered the original scientific basis of FN. In it, FRC tested the hypothesis that brain activity can change as a consequence of spinal manipulation, as detected by observed changes in the size of the ‘blind spot’ of the eye. In the abstract, the study is described as a “large (500 subjects) double-blind controlled study”, in which 12 hypotheses were tested on various subgroups of these 500 people. We included in our review the “phase 2 procedure” described in the study, in which twenty subjects with predetermined, identified increased ‘blind spot’ findings received either the ‘correct’ treatment (i.e. manipulation of C2 on the same side as the enlarged cortical map) or the ‘incorrect’ treatment (i.e. manipulation of C2 on the opposite side of the enlarged cortical map). With the ‘correct’ treatment, the ‘blind spot’ was reported to have normalized in size, whereas this did not happen with the ‘incorrect’ treatment, in accordance with the theory.
A review of the method revealed that allocation to treatment was not reported to have been determined in a random fashion. The ‘blind spot’ was apparently measured subjectively, without optometric equipment, but had been “confirmed” by two examiners, both at baseline and at follow-up. It was unclear what the label ‘double-blind’ referred to, as only the examiners were clearly reported to be blinded. It could be speculated that the study subjects were uninformed of the purpose of the study, so that the blind spot changes could not have been affected by expectation bias, but this was not stated in the paper. Study subjects were said to have been enrolled in ‘post-neurology programs at a variety of institutions’, hence possibly chiropractors, who may or may not have been naïve to the aims of the study. In addition, a definition of an enlarged blind spot was not provided. The result tables indicated that some type of continuous variable had been used, since t-tests were reported to have been used to test for differences between groups. It is therefore likely that the circumference or area of the blind spot was measured but, if so, this was not detailed. It was also not clear who undertook the statistical analysis and whether it was done without that person knowing which treatment had been provided.
We did not include in this review the other tests reported in this study, in which the remaining study subjects (N = 480) were included, because they did not compare different treatment groups but tested other types of hypotheses. Conflict of interest, funding, and human research ethics committee approval were not reported, but this was not common at that time.
“Effects of contralateral extremity manipulation on brain function” [21]
In this study [21] (score 3.5/7; 50%), 2 × 31 healthy adults received, in a random fashion, either an upper extremity manipulation or a sham manipulation with an unloaded activator instrument. The blind spot size was estimated ‘manually’ before and immediately after the intervention and found to have changed in a pre-hypothesized manner. This article was scrutinized in a previous review [5], in which it was noted that, although this is a randomized controlled trial, (i) the reliability/reproducibility of the blind spot measurement was uncertain, (ii) the study subjects were not described, and (iii) the statistical analysis was not reported to have been blinded. Hence, methodological issues could potentially affect the validity of the results. Furthermore, ethics committee approval, trial registration, and conflict of interest were not reported.
“The Treatment of Autism Spectrum Disorder with Auditory Neurofeedback: Randomized Placebo Controlled Trial Using the Mente Autism Device” [31]
In this study [31] (score 3/7; 43%), 83 subjects previously diagnosed with an autism spectrum disorder were randomized into two groups: one intervention group (active) and one placebo/sham group. The treatment consisted of 12 weeks of home-based neurofeedback therapy delivered by the Mente Autism therapy device, which produces binaural beats in the ears of the participants. These sounds were selected according to the child’s individual EEG pattern, recorded by the device. The control group used an identical device, but the binaural beats were randomly generated. Outcome variables were (1) qEEG, which was defined as “statistical analysis of EEG” and stated to “allow highly precise measurement of brain activity and connectivity”; (2) dynamic computerized posturography; and (3) five autism spectrum disorder questionnaires (the Autism Treatment Evaluation Checklist, the Social Responsiveness Scale-Second Edition, the Behaviour Rating Inventory of Executive Function, the Autism Behaviour Checklist, and the Questions About Behavioural Function). There were two evaluations, one at enrolment and the second after 12 weeks of treatment. In total, 49/83 (59%) subjects dropped out, and the statistical analysis included only 34 subjects.
Methodological problems were that the subjects were not shown to be blind or naïve, the randomisation was not stated to be concealed, the assessor and the person who analysed the data were not stated to be blind, and two of the outcome variables (qEEG and posturography) were not stated to be reliable or reproducible.
We noted that the study was funded by the Carrick Institute, the Plasticity Brain Center and Neurotech International Limited. It was reported that authors had no conflict of interest. This study was registered in ClinicalTrials.gov and was stated to have been approved by “own institutional review board”.
“Eye-Movement Training Results in Changes in qEEG and NIH Stroke Scale in Subject Suffering from Acute Middle Cerebral Artery Ischemic Stroke: A Randomized Control Trial” [30]
In this study [30] (score 3/7; 43%), 34 subjects with non-disabling ischemic middle cerebral artery stroke were randomly divided into two groups using a computer program. The control group (n = 17) received the standard treatment (aspirin) and the treatment group (n = 17) received the standard treatment plus eye-movement training. The study possibly took place in a hospital in Cuba (an assumption based on the acknowledgements), but this was not explicitly stated in the main text. The results were measured after one week with a stroke scale (the National Institutes of Health Stroke Scale, NIHSS) and a visualization of brain function (qEEG). Significant differences in favour of the intervention group were reported for the qEEG but not for the stroke scale, although the title indicates that this was the case for the stroke scale as well.
The major methodological problem in this study is that the qEEG was not reported to be reliable and reproducible. In fact, the qEEG does not appear to be an easily quantified outcome variable, as it involves various computer-generated colours appearing in different parts of the brain, requiring an objective and reproducible measurement method and an understanding of what the different areas relate to and whether they are pertinent in the treatment of stroke. The study was described by the authors as “double-blind”, but as there was no sham intervention, only a control group (treated with aspirin), the control subjects cannot have been blind to the type of treatment, as is usually the case in studies described as ‘double-blind’. Descriptions of the other types of blinding were not given, so it is not possible to determine which types of bias could have affected the results. When no sham treatment is given to the control group and blinding is impossible, study subjects should instead be naïve to the treatment method, to prevent their post-treatment performance from being boosted through expectations (expectation bias). However, this was not reported.
The study was, to our knowledge, not registered in a trial registry prior to its performance, so it was not possible to establish whether it was conducted according to the original plan. Ethics permission was provided by “our Institution”; however, it is unclear whether this refers to the Carrick Institute or the (possible) Cuban hospital. According to the report, there were plans to perform a one-year follow-up of the study. As the study was published in January 2016, and the time of writing this present report is August 2019, this appears not to have been done. Conflict of interest was not reported, but funding was stated to have been provided by the Carrick Institute and the Plasticity Brain Center.
“Effect of tone-based sound stimulation on balance performance of normal subjects: preliminary investigation” [29]
In this study [29] (score 2/7; 29%), thirty-nine subjects were exposed to various sounds and their balance was tested on an unstable surface. Study subjects were said to be their own controls, and the outcome test was reported to be reproducible. The article includes a detailed explanation of the intervention (sound), but we found the study design more difficult to understand, except that the interventions were not provided in random order, which would be relevant, as we would expect a learning curve for balance. The report also lacked an explanation of how variables were combined, which made the analysis difficult to follow. This is a preliminary study; there is also a study on posturographic changes and music listening with a larger study sample (next summary).
“Posturographic Changes Associated with Music Listening” [32]
In this study [32] (score 2.5/7; 36%), 210 healthy volunteers were randomly divided into four groups, of which three listened to different types of music: (i) Mozart, (ii) a slow song by Nolwenn Leroy, or (iii) a fast song by Nolwenn Leroy; the fourth (iv) was a control group (who listened to ‘white noise’). Thereafter, another 60 healthy volunteers were included to listen to six other artists, resulting in 10 groups. Balance was studied with a force platform before and after the intervention, using a comprehensive assessment with subjects standing on a perturbing surface with eyes closed. This was tested after 10 min, 1 week and 1 month. In all, 35 (13%) of the subjects dropped out. Individual scores were compared with values said to be normative for the relevant age groups. Outcome was reported in tables as the numbers and percentages of subjects who moved into the ‘normal’ ranges.
Some music was reported to improve postural stability more than white noise. Methodologically, it would have been advantageous if the report had described the study subjects’ characteristics and whether or not they were naïve, the duration of the intervention, and whether the assessor and statistician were blinded to treatment group. Unfortunately, the numbers of losses reported in the table and in the text were not easily identified.
No conflict of interest was reported, but the study was “funded by the Carrick Institute”. It was registered in ClinicalTrials.gov and authorized by their “own institutional review board”.
“The Effect of Off Vertical Axis and Multiplanar Vestibular Rotational Stimulation on Balance Stability and Limits of Stability” [20]
This study [20] (score 1.5/7; 21%), previously reviewed [5] and re-reviewed by us, dealt with postural reactions to various positional interventions and has been reported and discussed in some detail in the previous review [5]. It seems to describe a randomized controlled trial with four subgroups receiving different interventions, but it is also possible that it is an outcome study simply observing these four groups. Numerous tests were performed on only a few study subjects and, apart from random allocation having been reported, a thorough explanation of the intervention procedure, and clearly reported losses and exclusions, the methodological checklist items were unsatisfactory. Conflict of interest and funding were not reported. The ethics committee was their own “IRB” (Institutional Review Board), and there was no trial registration, although, if this was a simple outcome study rather than a randomized study, registration would not be necessary. In sum, we considered this both a confusing and a methodologically weak study. In the previous review [5], assistance from a methodologist had been needed to understand the study design, and this person was called in again to re-verify the interpretation of the long and dense methods section.
Summary of results
Because the methodological quality scores of the seven reviewed research articles were low, none exceeding 57%, we did not have confidence in the validity of their results and have therefore not considered these further.