R and Python for Data Science

Thursday, January 24, 2019: Using machine learning to test the efficient market hypothesis.

The purpose of this machine learning experiment is to support or reject the efficient market hypothesis (EMH). The EMH was developed by Eugene Fama, who argued that asset prices fully reflect all known information and follow a random walk. The term "random walk" was first defined in 1905 by the biostatistician Karl Pearson, who described it as a process in which any change is independent of previous changes and is fully unpredictable. On the other hand, Robert Shiller introduced behavioral finance theory. He likens changes in asset prices to a drunk taking a random walk from a lamppost, but with an elastic tied to his ankle: as the drunk gets farther from the lamppost, he is tugged back. Asset prices revert to a mean. Both Fama and Shiller won Nobel prizes for their theories.

Setup: Load libraries and install if needed.

Get the S&P 500 index historical data from "1951-01-01" to "2018-12-31".

The file is hosted on GitHub with a link defined by a Google-shortened URL. If for any reason GitHub is down, the file will be re-constructed from Yahoo Finance.

Final check: if this fails, check your internet connection.
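The original post does this setup in R; here is a minimal Python sketch of the same load-with-fallback idea. The GitHub URL is a placeholder (the post's shortened link is not reproduced), and the `yfinance` package and `^GSPC` ticker are assumptions for the Yahoo Finance fallback.

```python
import pandas as pd

# Placeholder URL: the post's actual Google-shortened GitHub link is not shown.
GITHUB_CSV = "https://raw.githubusercontent.com/<user>/<repo>/master/sp500.csv"

def load_sp500(start="1951-01-01", end="2018-12-31"):
    """Try the GitHub-hosted file first; fall back to Yahoo Finance."""
    try:
        df = pd.read_csv(GITHUB_CSV, parse_dates=["Date"], index_col="Date")
    except Exception:
        import yfinance as yf  # assumed fallback source, as in the post
        df = yf.download("^GSPC", start=start, end=end)
    return df.loc[start:end]

sp500 = load_sp500()
assert not sp500.empty, "Download failed - check internet connection."
```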

Explore and pre-process the data.

Calculate the response, or outcome, as the one-month forward gain or loss. These are monthly returns, with one month represented by 21 trading days. Normalize the outcome to the range 0–1.
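A minimal pandas sketch of the outcome construction, continuing from the `sp500` frame above (the `Close` column name is an assumption):

```python
import numpy as np

HORIZON = 21  # one trading month

close = sp500["Close"]
fwd_return = close.shift(-HORIZON) / close - 1.0   # one-month forward gain/loss
# Min-max normalize the outcome to the [0, 1] range
outcome = (fwd_return - fwd_return.min()) / (fwd_return.max() - fwd_return.min())
```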

Define four revert-to-the-mean predictors:

Predictor 1: Actual volatility.

VIX data, implied volatility for the next 30 days, is not available for enough history. Instead, we calculate actual volatility over the last 21 days, annualized based on 252 trading days. Note: volatility is defined as the variability of returns, not of asset prices.
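Continuing the sketch above, this predictor might be computed as:

```python
daily_ret = close.pct_change()                        # returns, not prices
vol_21d = daily_ret.rolling(21).std() * np.sqrt(252)  # annualized 21-day volatility
```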

Predictor 2: Distance from year high.

The decline from the 52-week high shows the position of the market: Dip = 0–10%, Correction = 10–20%, Bear = 20%+.
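A sketch of the drawdown predictor, assuming 252 trading days as the 52-week window and using the post's dip/correction/bear buckets:

```python
high_52w = close.rolling(252).max()                # trailing 52-week high
drawdown = 1.0 - close / high_52w                  # 0 = at the high
regime = pd.cut(drawdown, bins=[-0.001, 0.10, 0.20, 1.0],
                labels=["dip", "correction", "bear"])
```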

Predictor 3: Distance from the 200-day moving average.

The 200-day moving average is popular among technicians and institutions. Investors and traders are cautious when the price is below the 200 DMA, but prices tend to be tugged back toward the mean.
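A sketch of the 200 DMA distance predictor, continuing from above:

```python
dma_200 = close.rolling(200).mean()
dist_dma = close / dma_200 - 1.0                   # negative = below the 200 DMA
```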

Predictor 4: RSI, the Relative Strength Index.

It's an oscillator that moves between two extremes: overbought and oversold.
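The post does not state its RSI lookback; a 14-day RSI with Wilder's smoothing (the common default) is assumed in this sketch:

```python
delta = close.diff()
gain = delta.clip(lower=0).ewm(alpha=1/14, adjust=False).mean()
loss = (-delta.clip(upper=0)).ewm(alpha=1/14, adjust=False).mean()
rsi = 100 - 100 / (1 + gain / loss)    # near 70+: overbought; near 30-: oversold
```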

Reduce the data to the predictors and response, and delete NAs caused by the lag and lead functions.

Check for multicollinearity of predictors, which might reduce accuracy.

There is some multicollinearity, but with only four variables there is no need for principal component analysis (PCA) to reduce correlation among predictors.
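Continuing the sketches above, assembling the modeling frame and inspecting pairwise correlations might look like this:

```python
data = pd.DataFrame({"outcome": outcome, "vol": vol_21d, "drawdown": drawdown,
                     "dist_dma": dist_dma, "rsi": rsi}).dropna()  # drop lag/lead NAs
print(data.drop(columns="outcome").corr())   # pairwise predictor correlations
```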

Use caret to partition the data into train and test sets, 50% each.
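The post uses R's caret (`createDataPartition`) for the split; a scikit-learn equivalent is sketched below. A plain random 50/50 split is an assumption, since caret's partition additionally stratifies on the outcome:

```python
from sklearn.model_selection import train_test_split

X = data.drop(columns="outcome")
y = data["outcome"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5,
                                                    random_state=1)
```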

Random walk guess: the efficient market hypothesis.

Efficient market hypothesis, revised: use the mean, similar to a simple guess, and assume exponential growth as shown in the first chart above.

Use the kNN (k-nearest neighbors) algorithm; a sketch follows below.
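A scikit-learn sketch of the mean-guess baseline and the kNN model (k = 5 is an assumption; the post does not report its k):

```python
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Baseline: predict the training mean, akin to the revised-EMH simple guess
baseline = np.full(len(y_test), y_train.mean())
print("mean-guess RMSE:", np.sqrt(mean_squared_error(y_test, baseline)))

# kNN regression on the four predictors (scaled, since kNN is distance-based)
knn = make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=5))
knn.fit(X_train, y_train)
print("kNN RMSE:", np.sqrt(mean_squared_error(y_test, knn.predict(X_test))))
```

If the kNN RMSE does not beat the mean-guess RMSE, the experiment fails to reject the (revised) efficient market hypothesis.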

Conclusion:

Based on this limited machine learning experiment, we were unable to reduce the residual errors, as the results above show. The efficient market hypothesis, revised to include some exponential growth to compensate for risk and inflation, stands. A future change in asset prices is random, and a drunk could randomly return to the lamppost without an elastic tied to his ankle.


Efficient Market Hypothesis tests in Python

PawelChruszczewski/EMH


In this repository you can find some of the empirical tests of the efficient market hypothesis. The ones currently under review:

  • Returns over Short Horizons
  • Returns over Long Horizons
  • Reversal Effect (Fads hypothesis)
  • Dividend yield as predictor of stock returns

The fundamental data come from the Compustat database and the Bossa website: https://info.bossa.pl/notowania/daneatech/omegasupercharts/


A procedure for testing the hypothesis of weak efficiency in financial markets: a Monte Carlo simulation

  • Original Paper
  • Open access
  • Published: 31 March 2022
  • Volume 31, pages 1289–1327 (2022)


  • José A. Roldán-Casas, ORCID: orcid.org/0000-0002-1785-4147
  • Mª B. García-Moreno García, ORCID: orcid.org/0000-0002-7981-8846


The weak form of the efficient market hypothesis is identified with the conditions established by different types of random walks (1–3) on the returns associated with the prices of a financial asset. The methods traditionally applied for testing weak efficiency in a financial market as stated by the random walk model test only some necessary, but not sufficient, condition of this model. Thus, a procedure is proposed to detect if a return series associated with a given price index follows a random walk and, if so, what type it is. The procedure combines methods that test only a necessary, but not sufficient, condition for the fulfilment of the random walk hypothesis and methods that directly test a particular type of random walk. The proposed procedure is evaluated by means of a Monte Carlo experiment, and the results show that this procedure performs better (more powerful) against linear correlation-only alternatives when starting from the Ljung–Box test. On the other hand, against the random walk type 3 alternative, the procedure is more powerful when it is initiated from the BDS test.


1 Introduction

The hypothesis of financial market efficiency is an analytical approach aimed at explaining movements in prices of financial assets over time and is based on the insight that prices for such assets are determined by the rational behaviour of agents interacting in the market. This hypothesis states that stock prices reflect all the information available for the agents when they are determined. Therefore, if the hypothesis is fulfilled, it would not be possible to anticipate price changes and formulate investment strategies to obtain substantial returns, i.e., predictions about future market behaviour could not be performed.

The validation of the hypothesis of efficiency in a given financial market is important for both investors and trade regulatory institutions. It provides criteria to assess whether the environment favours a state in which all agents playing in the market compete on an equal footing in a "fair game", where expectations of success and failure are equivalent.

Although the theoretical origin of the efficiency hypothesis arises from the work of Bachelier in 1900, Samuelson reported the theoretical foundations for this hypothesis in 1965. On the other hand, Fama established, for the first time, the concept of an efficient market. A short time later, the concept of the hypothesis of financial market efficiency emerged from the work of Roberts (1967), which also analysed efficiency with an informational outlook, leading to a rating for efficiency on three levels according to the rising availability of information for agents: weak, semi-strong and strong. Thus, weak efficiency means that the information available to the agents is restricted to the historical price series; semi-strong efficiency means that all public information is available to all agents; and strong efficiency means that the set of available information includes the previously described information and other private data, known as insider trading.

The weak form of the efficiency hypothesis has been the benchmark of the theoretical and empirical approaches throughout history. In relation to the theoretical contributions, most link the weak efficiency hypothesis to the fact that financial asset prices follow a random walk (in form 1, 2 or 3) or a martingale. However, since it is necessary to impose additional restrictions on the underlying probability distributions that lead to one of the forms of random walk to obtain testable hypotheses derived from the martingale model, it seems logical to assume any of these forms as a pricing model.

Specifically, the types of random walks with which the weak efficiency hypothesis is identified are conditions that are established on the returns of a financial asset, which are relaxed from random walk 1 (which is the most restrictive) to random walk 3 (which corresponds to the most plausible in economic terms since it is not as restrictive). This makes it possible to evaluate the degree of weak efficiency.

Although numerous procedures have traditionally been used to test the weak efficiency of a financial market according to the random walk model, many test only some necessary, but not sufficient, condition of the aforementioned model in any of its forms (this is the case, for example, of the so-called linear methods that test only the necessary uncorrelation for the three types of random walk). In any case, applying one of these tests can lead to an incorrect conclusion. On the other hand, there are methods that directly test a specific type of random walk.

Through the strategic combination of both types of methods, a procedure that allows us to detect if a time series of financial returns follows a random walk and, if so, its corresponding type, is proposed. The objective is to reduce the effect of the above-mentioned limitation of some traditional methods when studying the weak efficiency hypothesis.

Consequently, the work begins (Sect.  2 ) by describing how the hypothesis of efficiency in a financial market is evaluated based on the so-called joint-hypothesis problem (Fama 1991 ). The different methods traditionally applied to test the weak efficiency in the forms that establish the random walk types are detailed in Sect.  3 . Next, a procedure is proposed to detect if a return series associated with a given price index follows a random walk and, if so, what type it is. This procedure combines methods that test only a necessary, but not sufficient, condition for the fulfilment of the random walk hypothesis and methods that directly test for a particular type of random walk. The proposed procedure is evaluated by means of a Monte Carlo experiment, and the results are presented in Sect.  4 . Finally, Sect.  5 contains the main conclusions of the study.

2 Evaluation of the efficiency hypothesis

To evaluate the efficiency of a financial market, Bailey (2005) proposes a procedure on the basis of the joint-hypothesis problem of Fama (1991), that is, considering, in addition to the available information, the existence of an underlying model to fix the prices of financial assets. Specifically, based on the aforementioned model and the cited set of information, the criterion that determines the efficiency of the market is established to create a testable hypothesis. Thus, by means of some method designed to test the hypothesis of efficiency, it is tested whether the collected data (observed prices) support this hypothesis, which would imply the efficiency or inefficiency of the market. Figure 1 shows this whole process schematically.

Figure 1 Scheme of the procedure to evaluate the efficiency of a market. Source: Bailey (2005)

Clearly, in this procedure, the efficiency of a market depends on the pricing model and the information set assumed. Thus, if the conclusion for a market is efficiency (inefficiency) given a pricing model and a specific information set, it is possible that inefficiency (efficiency) would be concluded if another model and/or different set are assumed.

Traditionally, the martingale and the random walk are assumed to be models to fix the price \(P_{t}\) of a financial asset whose continuously compounded return or log return is given by the expression

$$r_{t} = \ln P_{t} - \ln P_{t - 1}.$$

2.1 Martingale

Samuelson (1965) and Fama (1970), understanding the market as a fair game, raised the idea of efficiency from an informational outlook, with the less restrictive model, the martingale model. In this case, if \(\Omega_{t}\) is the available information set at time t,

$$E\left[ |P_{t}| \right] < \infty, \qquad E\left[ P_{t + 1} \mid \Omega_{t} \right] = P_{t}. \qquad (1)$$

That is, in an efficient market, it is not possible to forecast the future using the available information, so the best forecast for the price of an asset at time \(t + 1\) is today's price. The second condition of expression (1) implies

$$E\left[ P_{t + 1} - P_{t} \mid \Omega_{t} \right] = 0,$$

which reflects the idea of a fair game and allows us to affirm that the return \(r_{t}\) constitutes a martingale difference sequence, i.e., it satisfies the conditions

$$E\left[ |r_{t}| \right] < \infty, \qquad E\left[ r_{t + 1} \mid \Omega_{t} \right] = 0 \quad \forall t.$$

2.2 Random walk

The random walk was initially formulated as

$$\ln P_{t} = \ln P_{t - 1} + r_{t}, \qquad (2)$$

where \(r_{t}\) is considered an independent and identically distributed process with mean 0 and constant variance, which assumes that changes in prices are unpredictable and random, a fact that is inherent to the first versions of the efficient market hypothesis. Nevertheless, several studies have shown that financial data are inconsistent with these conditions.

Campbell et al. (1997) adjusted the idea of random walks based on the formulation

$$p_{t} = \mu + p_{t - 1} + \varepsilon_{t}, \qquad p_{t} \equiv \ln P_{t}, \qquad (3)$$

where μ is a constant term. By establishing conditions on the dependency structure of the process \(\{ \varepsilon_{t} \}\) (which the authors call increments), they distinguish three types of random walks: 1, 2 and 3. In this case, the change in the price or return is

$$r_{t} = p_{t} - p_{t - 1} = \mu + \varepsilon_{t}.$$

So the conditions fixed on the increments \(\{ \varepsilon_{t} \}\) can be extrapolated integrally to the returns { \(r_{t}\) }.

Random walk 1 (RW1) : IID increments/returns

In this first type, \(\varepsilon_{t}\) is an independent and identically distributed process with mean 0 and variance \(\sigma^{{2}}\) , or \(\varepsilon_{t} \sim {\text{IID(0,}}\sigma^{{2}} {)}\) in abbreviated form, which implies \(r_{t} \sim {\text{IID(}}\mu {,}\sigma^{{2}} {)}\) . Thus, formulation (2) is a particular case of this type of random walk for \(\mu = 0\) . Under these conditions, the constant term μ is the expected price change or drift. If, in addition, normality of \(\varepsilon_{t}\) is assumed, then (3) is equivalent to arithmetic Brownian motion.

In this case, the independence of \(\varepsilon_{t}\) implies that random walk 1 is also a fair game but in a much stronger sense than martingale, since the mentioned independence implies not only that increments/returns are uncorrelated but also that any nonlinear functions of them are uncorrelated.

Random walk 2 (RW2) : independent increments/returns

For this type of random walk, \(\varepsilon_{t}\) (and by extension \(r_{t}\) ) is an independent but not identically distributed process (INID). RW2 contains RW1 as a particular case.

This version of the random walk accommodates more general price generation processes and, at the same time, is more in line with the reality of the market since, for example, it allows for unconditional heteroskedasticity in \(r_{t}\) , thus taking into account the temporal dependence of volatility that is characteristic of financial series.

Random walk 3 (RW3) : uncorrelated increments/returns

Under this denomination, \(\varepsilon_{t}\) (and therefore \(r_{t}\)) is a process that is not independent or identically distributed but is uncorrelated; that is, the cases considered are those in which

$$\operatorname{Cov} (\varepsilon_{t}, \varepsilon_{t - k}) = 0 \;\; \forall k \ne 0, \qquad \text{but} \quad \operatorname{Cov} (\varepsilon_{t}^{2}, \varepsilon_{t - k}^{2}) \ne 0 \;\; \text{for some } k \ne 0,$$

which means there may be dependence but no correlation.

This is the weakest form of the random walk hypothesis and contains RW1 and RW2 as special cases.

As previously mentioned, financial data tend to reject random walk 1, mainly due to non-compliance with the constancy assumption of the variance of \(r_{t}\) . In contrast, random walks 2 and 3 are more consistent with financial reality since they allow for heteroskedasticity (conditional or unconditional) in \(r_{t}\) . Consequently, we could say that RW2 is the type of random walk closest to the martingale [actually, RW1 and RW2 satisfy the conditions of the martingale, but in a stronger sense (Bailey 2005 )].
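To make these definitions concrete, here is a small illustrative simulation, with assumed parameter values, of RW1 (iid increments) and of an ARCH(1) process as a standard example of RW3 (uncorrelated but dependent increments); both series are serially uncorrelated, but only the squared RW3 series is autocorrelated:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000

# RW1: r_t ~ iid N(0, 1)
r_rw1 = rng.standard_normal(T)

# RW3 example: r_t = h_t * eps_t with h_t^2 = a0 + a1 * r_{t-1}^2 (ARCH(1))
a0, a1 = 1.0, 0.3
r_rw3 = np.zeros(T)
for t in range(1, T):
    h = np.sqrt(a0 + a1 * r_rw3[t - 1] ** 2)
    r_rw3[t] = h * rng.standard_normal()
```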

2.3 Martingale vs. random walk

The random walk hypothesis, in its three versions, and that of the martingale are captured in an expression that considers the kind of dependence that can exist between the returns r of a given asset at two times, \(t\) and \(t + k\):

$$\operatorname{Cov}\left[ f(r_{t}), g(r_{t + k}) \right] = 0, \qquad k \ne 0, \;\; \forall t, \qquad (4)$$

where, in principle, \(f( \cdot )\) and \(g( \cdot )\) are two arbitrary functions; the expression may be interpreted as an orthogonality condition. For appropriately chosen \(f( \cdot )\) and \(g( \cdot )\), all versions of the random walk hypothesis and the martingale hypothesis are captured by (4). Specifically,

If condition (4) is satisfied only in the case that \(f( \cdot )\) and \(g( \cdot )\) are linear functions, then the returns are serially uncorrelated but not independent, which is identified with RW3. In this context, the linear projection of \(r_{t + k}\) onto the set of its past values \(\Omega_{t}\) satisfies

$$\hat{E}\left[ r_{t + k} \mid \Omega_{t} \right] = \mu,$$

where \(\hat{E}[\, \cdot \mid \Omega_{t} ]\) denotes the linear projection.

If condition (4) is satisfied only when \(g( \cdot )\) is a linear function but \(f( \cdot )\) is unrestricted, then the returns are uncorrelated with any function of their past values, which is equivalent to the martingale hypothesis. In this case,

$$E\left[ r_{t + k} \mid \Omega_{t} \right] = \mu.$$

If condition (4) holds for any \(f( \cdot )\) and \(g( \cdot )\), then returns are independent, which corresponds to RW1 and RW2. In this case,

$$pdf(r_{t + k} \mid \Omega_{t}) = pdf(r_{t + k}),$$

where \(pdf\) denotes the probability density function.

Table 1 summarizes the hypotheses derived from expression ( 4 ).

Since, in practice, additional restrictions are usually imposed on the underlying probability distributions to obtain testable hypotheses derived from the martingale model, which results in the conditions of some of the random walk versions (see footnote 1; Bailey 2005, pp. 59–60), it is normal to assume the random walk as a pricing model.

Therefore, if the available information set is the historical price series and the pricing model assumed is the random walk, weak efficiency is identified with some types of random walks.

3 Evaluation of the weak efficiency

3.1 Traditional methods

The methods traditionally used to test the weak form of efficiency, as established by some of the random walk types, are classified into two groups depending on whether they make use of formal statistical inference.

RW2 is analysed with methods that do not use formal inference techniques (filter rules and technical analysis; see footnote 2) because this type of random walk requires that the return series be INID. In this case, it would be very complicated to test for independence without assuming identical distributions (particularly in the context of time series), since the sampling distributions of the statistics that would be constructed to carry out the corresponding test could not be obtained (Campbell et al. 1997, p. 41).

On the other hand, the methods that apply formal inference techniques for the analysis can be classified into two groups according to whether they allow direct testing of a type of random walk or only some necessary, but not sufficient, condition for its fulfilment.

The methods of the second group include the Bartlett test, tests based on the Box–Jenkins methodology, the Box–Pierce test, the Ljung–Box test and the variance ratio test. These methods test only the uncorrelation condition on the return series (they are also called linear methods; see footnote 3), which is necessary for any type of random walk. Since these tests do not detect non-linear relationships (see footnote 4) that, if they existed, would entail dependence of the series, rejection of the null hypothesis implies that the series is correlated and, consequently, that it does not follow any type of random walk.

On the other hand, for tests that try to detect ARCH effects, rejection of the null hypothesis involves only the acceptance of non-linear relationships, which does not necessarily imply that the series is uncorrelated.

Other methods allow direct determination of whether the return series follows a specific type of random walk. This means that these procedures also take into account the possibility of non-linear relationships in the series, either because they are considered by the null hypothesis itself or because the cited methods have power against alternatives that capture these relationships (they would be, therefore, non-linear methods). These methods include those that allow testing of the random walk type 1 (BDS test, runs test and sequences and reversals test) and one that tests for a type 3 random walk (variance ratio test, which considers the heteroskedasticity of the series).

Figure  2 shows the classification established in this section for the different methods that are traditionally used to test the hypothesis of weak efficiency.

Figure 2 Methods traditionally used to test the random walk hypothesis (weak efficiency). Source: own elaboration

The financial literature shows that the methods described above have traditionally been applied to test the weak efficiency hypothesis in financial markets.

Correlation tests to determine the efficiency of a market were first used when Fama ( 1965 ) and Samuelson ( 1965 ) laid the foundations of efficient market theory. From these beginnings, the works developed by Moore ( 1964 ), Theil and Leenders ( 1965 ), Fama ( 1965 ) and Fisher ( 1966 ), among others, stand out.

These tests were used as almost the only tool to analyse the efficiency of a market until, in the 1970s, seasonal effects and calendar anomalies became relevant for the analysis. Then, new methodologies incorporating these effects emerged, such as the seasonality tests applied by Roseff and Kinney ( 1976 ), French ( 1980 ) and Gultekin and Gultekin ( 1983 ).

In the 1990s, studies that analysed the efficiency hypothesis in financial markets using so-called traditional methods began to appear. This practice has continued to the present day, as evidenced by the most prominent empirical works on financial efficiency in recent years.

Articles using technical analysis to test for the efficiency of a financial market include Potvin et al. ( 2004 ), Marshall et al. ( 2006 ), Chen et al. ( 2009 ), Alexeev and Tapon ( 2011 ), Shynkevich ( 2012 ), Ho et al. ( 2012 ), Leković ( 2018 ), Picasso et al. ( 2019 ) and Nti et al. ( 2020 ).

On the other hand, among the studies that apply methods that test only a necessary, but not sufficient, condition of the random walk hypothesis, the most numerous are those that use correlation tests . In this sense, we can cite the studies developed by Buguk and Brorsen ( 2003 ), DePenya and Gil-Alana ( 2007 ), Lim et al. ( 2008b ), Álvarez-Ramírez and Escarela-Pérez ( 2010 ), Khan and Vieito ( 2012 ), Ryaly et al. ( 2014 ), Juana ( 2017 ), Rossi and Gunardi ( 2018 ) and Stoian and Iorgulescu ( 2020 ). Research applying the variance ratio test (also very numerous) includes Hasan et al. ( 2003 ), Hoque et al. ( 2007 ), Righi and Ceretta ( 2013 ), Kumar ( 2018 ), Omane-Adjepong et al. ( 2019 ) and Sánchez-Granero et al. ( 2020 ). Finally, ARCH effect tests have been used in several papers, such as Appiah-Kusi and Menyah ( 2003 ), Cheong et al. ( 2007 ), Jayasinghe and Tsui ( 2008 ), Lim et al. ( 2008a ), Chuang et al. ( 2012 ), Rossi and Gunardi ( 2018 ) and Khanh and Dat ( 2020 ).

Regarding methods that directly test a type of random walk, the runs test has been used in works such as Dicle et al. ( 2010 ), Jamaani and Roca ( 2015 ), Leković ( 2018 ), Chu et al. ( 2019 ) and Tiwari et al. ( 2019 ). Meanwhile, the application of the BDS test can be found in studies such as Yao and Rahaman ( 2018 ), Abdullah et al. ( 2020 ), Kołatka ( 2020 ) and Adaramola and Obisesan ( 2021 ).

Therefore, the proposal of a procedure (Sect.  3.2 ) that reduces the limitations of traditional methods would be a novel contribution to the financial field as far as the analysis of the weak efficiency hypothesis is concerned. Moreover, it would be more accurate than the traditional methods in determining whether a return series follows a random walk.

3.2 Proposed procedure

By strategically combining the methods analysed in the previous section, we propose a procedure to test the random walk hypothesis that can be started either from a method that tests only a necessary, but not sufficient, condition or from one that directly tests a specific type of random walk (1, 2 or 3).

On the one hand, if the procedure is started with a method from the first group and shows correlation of the return series, it would not follow any type of random walk. In the opposite case (uncorrelation), an ARCH effect test is recommended to determine the type of random walk. Thus, if ARCH effects are detected, which implies the existence of non-linear relationships, it should be concluded that the series is RW3. Otherwise, the series will be RW1 or RW2, and a non-formal statistical inference technique can be applied to test for type 2. Finally, if the RW2 hypothesis is rejected, then the series is necessarily RW1.

On the other hand, regarding the methods that directly test a type of random walk, it is proposed to start the procedure with one that tests RW1. Thus, if the null hypothesis is rejected with this method, it cannot be ruled out that the series is RW2, RW3 or not a random walk at all. Before affirming that we are not facing any type of random walk, first it is suggested to check for type 2 by applying a non-formal statistical inference technique. If the RW2 hypothesis cannot be accepted, then RW3 is tested. In this case, rejection of the RW3 hypothesis implies that the series is not a random walk.
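A minimal Python sketch of the first (linear-test-first) path of the procedure, under assumed lag choices; the paper's own implementation is in EViews, and statsmodels also offers a `bds` function (in `statsmodels.tsa.stattools`) for the alternative path that starts from a direct RW1 test:

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox, het_arch

def classify_random_walk(returns, alpha=0.05):
    """Ljung-Box first; then ARCH effects to separate RW1/RW2 from RW3."""
    r = np.asarray(returns, dtype=float)
    lb_pvalue = acorr_ljungbox(r, lags=[10])["lb_pvalue"].iloc[0]
    if lb_pvalue < alpha:
        return "correlated: not a random walk of any type"
    # Uncorrelated: test for ARCH effects (order 4, as in the paper's experiment)
    _, arch_pvalue, _, _ = het_arch(r - r.mean(), nlags=4)
    if arch_pvalue < alpha:
        return "uncorrelated with ARCH effects: random walk 3"
    return "random walk 1 or 2 (RW2 requires non-formal techniques)"
```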

Figure  3 schematically shows the described procedure.

Figure 3 Procedure for testing the random walk hypothesis. Source: own elaboration

The acceptance of market inefficiency (i.e., that the return series is not RW) occurs when the price series analysed shows non-randomness, a structure that can be identified, long systematic deviations from its intrinsic value, etc. (even the RW3 hypothesis implies dependence but no correlation). This indicates a certain degree of predictability, at least in the short run, i.e., it is possible to forecast both the asset returns and the volatility of the market using past price changes. These forecasts are constructed on the basis of models reflecting the behaviour of financial asset prices.

Among the models that allow linear structures to be captured, the ARIMA and ARIMAX models stand out. Moreover, ARCH family models are used for modelling and forecasting the conditional volatility of asset returns. On the other hand, when the return series presents non-linear relationships, it is common to use non-parametric and non-linear models, including those based on neural networks and machine learning techniques. Finally, hybrid models (a combination of two or more of the procedures described above) consider all the typical characteristics of financial series.

4 Monte Carlo experiment

The procedure introduced in the previous section is evaluated by means of a Monte Carlo experiment (see footnote 5), considering the variance ratio test proposed by Lo and MacKinlay (1988) (see footnote 6) and the Ljung–Box (1978) test, when started from methods that test only some necessary, but not sufficient, condition of the random walk hypothesis; and the BDS test (see footnote 7) and the runs test when starting from methods that directly test the mentioned hypothesis. If the procedure requires the application of an ARCH effect test to decide between random walks 1 and 3, ARCH models up to order 4 are used.

To conduct this analysis, return series are generated from two different models because the objective is twofold: to evaluate the performance of the procedure in the analysis of the random walk 1 hypothesis against the linear correlation alternative, on the one hand, and against that of the random walk 3, on the other (see footnote 8).

Thus, the BDS, runs, variance ratio and Ljung–Box tests are applied to each generated return series. Then, if the RW1 hypothesis is rejected by the first two tests, the variance-ratio test is applied to determine whether the series is at least RW3. On the other hand, if the random walk hypothesis is not rejected with the first two tests, an ARCH effect test is applied to discern between RW1 and RW3. The process is replicated 10,000 times for each sample size T and each value of the parameter involved in the generation of the series (see the whole process in Fig.  4 ).

Figure 4 Iteration of the simulation process.

Nominal size

Before analysing the Monte Carlo powers of the procedure initiated from the different indicated tests, the corresponding nominal size is estimated; that is, the maximum probability of falsely rejecting the null hypothesis of random walk 1 is calculated in each case. Since the different ways of executing the proposed procedure contemplate the possibility of applying tests sequentially to make a decision, we must not expect, in general, the nominal size of each case to coincide with the significance level α which is fixed in each of the individual tests.

To estimate the mentioned nominal size, return series that follow a random walk 1 are generated as

$$r_{t} = \varepsilon_{t}, \qquad (5)$$

where \(\varepsilon_{t} \sim iid(0,1)\) . Specifically, 10,000 series of size T are generated, and the tests required by the specific way in which the procedure is being applied are performed on each data series independently, not sequentially, with significance level α . The reiteration of this process allows us to determine, for each T , the number of acceptances and rejections of the null hypothesis (random walk 1) that occur with the independent application of each test. This makes possible the estimation of the nominal size of the procedure in each case as the quotient of the total rejections of the null hypothesis divided by the total number of replications (10,000 in this case).

The process described in the previous paragraph was performed for the sample sizes T = 25, 50, 100, 250, 500 and 1000 and significance levels α = 0.02, 0.05 and 0.1 [application of the process in Fig. 4 for expression (5)]. The results (Table 2) indicate, for a given value T, the (estimated) theoretical size of the procedure when a significance level α is set in the individual tests required by the cited procedure initiated from a specific method. For example, if for \(T = 100\) the researcher sets a value of \(\alpha = 0.05\) in the individual tests and wishes to apply the procedure initiated from the variance ratio test, they will be working with an (estimated) theoretical size of 0.0975.
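As a building block for this calculation, here is a reduced sketch of the size estimation for a single test (the Ljung–Box case, 1,000 instead of 10,000 replications, in Python rather than the paper's EViews routines):

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(42)
T, alpha, reps = 250, 0.05, 1000
rejections = 0
for _ in range(reps):
    r = rng.standard_normal(T)         # RW1 returns: r_t = eps_t ~ iid(0, 1)
    p = acorr_ljungbox(r, lags=[10])["lb_pvalue"].iloc[0]
    rejections += p < alpha
print("estimated rejection rate:", rejections / reps)  # near alpha for a single test
```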

The estimated nominal size of the procedure when starting from methods that directly test the hypothesis of random walk 1 is much better in the case of the runs test, since it practically coincides with the significance level α fixed (in the individual tests) for any sample size T. However, size distortions (estimated values far from the level α) are evident when the procedure is initiated from the BDS test, and the results are clearly affected by T. In effect, the greatest distortions occur for small sample sizes and decrease as T increases (at \(T = 1000\), the estimated nominal size for each α is 0.0566, 0.1332 and 0.2214, respectively, i.e., approximately \(2\alpha\)).

Since the variance ratio test and the Ljung–Box test do not directly test the random walk 1 hypothesis—to estimate the nominal size of the procedure initiated from any of them, it is necessary to apply tests sequentially—the results that appear in Table 2 for these two cases are expected in the sense that the estimates of the respective nominal sizes for each T are greater than the significance level α . In this context of size distortion, the best results correspond to the case of the variance ratio test, with estimated values very close to the significance level α for small sample sizes ( \(T = 25\) and 50) but that increase as T increases (note that at \(T = 1000\) , for each value of α , the nominal size is approximately double that at \(T = 25\) , i.e., approximately \(2\alpha\) ). In the case of the Ljung–Box test, where the distortion is greater, the sample size T hardly influences the estimated values of the nominal size since, irrespective of the value of T , they remain approximately 10%, 21% and 37% for levels 0.02, 0.05 and 0.1, respectively.

Empirical size and Monte Carlo power

(b1) The performance of the procedure for testing the random walk 1 hypothesis against the linear correlation-only alternative (among the variables of the return series generating process) is analysed using the model

$$r_{t} = \phi_{1} r_{t - 1} + \varepsilon_{t}, \qquad (6)$$

with \(r_{0} = 0\) and \(\varepsilon_{t} \sim iid(0,1)\). By means of (6), ten thousand samples of sizes T = 25, 50, 100, 250, 500 and 1000 of the series \(r_{t}\) are generated for each value of parameter \(\phi_{1}\) considered: −0.9, −0.75, −0.5, −0.25, −0.1, 0, 0.1, 0.25, 0.5, 0.75 and 0.9. In this way, the model yields return series that follow a random walk 1 (particular case in which \(\phi_{1} = 0\)) and, as an alternative, series with a first-order autoregressive structure (cases in which \(\phi_{1} \ne 0\)), i.e., they would be generated by a process whose variables are correlated. Therefore, when the null hypothesis is rejected, some degree of predictability is admitted since, by modelling the above autoregressive structure with an ARMA model, it is possible to predict price changes on the basis of historical price changes.

The procedure, starting from each of the considered tests (BDS, runs, Ljung–Box and variance ratio), was applied to the different series generated by the combinations of values of T and \(\phi_{1}\) with a significance level of 5% [application of the process in Fig.  4 for expression ( 6 )]. Then, we calculated the number of times that the different decisions contemplated by the two ways of applying the procedure are made (according to whether we start from a method that does or does not directly test the random walk hypothesis).

From the previous results, we calculate, for each sample size T , the percentage of rejection of the null hypothesis (random walk 1) when starting from each of the four tests considered, depending on the value of parameter \(\phi_{1}\) . Since \(\phi_{1} = 0\) implies that the null hypothesis is true, in this particular case, the calculations represent the empirical probability of committing a type I error for the procedure in the four applications, i.e., the empirical size . However, when \(\phi_{1} \ne 0\) , the cited calculations represent the Monte Carlo power of each version of the procedure since for these values of \(\phi_{1}\) , the null hypothesis is false.

Empirical size

The empirical sizes (Table 3 ) that resulted from the different cases analysed nearly coincide with the corresponding theoretical probabilities calculated for \(\alpha = 0.05\) (see Table 2 ). Therefore, there is empirical confirmation of the size distortions that appear in the procedure according to the test from which it is started. In effect,

When the procedure is initiated from methods that directly test the random walk 1 hypothesis, the results confirm that for the runs test, the size of the procedure remains approximately 5% (the significance level) for all T . Nevertheless, when initiating from the BDS test, a very high size distortion is produced for small sample sizes (0.6806 and 0.5425 at \(T = 25\) and 50, respectively), but the distortion decreases as T increases (it reaches a value of 0.1334 at \(T = 1000\) ).

The size distortions exhibited by the procedure when starting with methods that test only a necessary, but not sufficient, condition of the random walk hypothesis, are less pronounced when the procedure is applied starting from the variance ratio test than when starting from the Ljung–Box test. Likewise, in the former case, the empirical size increases with the sample size T from values close to the significance level (0.05) to more than double the significance level (from 0.0603 at \(T = 25\) to 0.1287 at \(T = 1000\) ). In the latter case (Ljung–Box), the values between which the empirical size oscillates (18% and 22%) do not allow us to affirm that there exists an influence of T .

Monte Carlo power

Table 4 reports, for each sample size T , the power calculations of the procedure started from each of the four tests considered in this study, i.e., the probability of rejecting the null hypothesis (random walk 1) with each version of the procedure on the assumption that the hypothesis is false. Likewise, since several alternatives to the null hypothesis (values that satisfy \(\phi_{1} \ne 0\) ) are considered, the corresponding power functions of the cited versions of the procedure are obtained and plotted in a comparative way for each T (Fig.  5 ).

Figure 5 Monte Carlo power of the procedure when starting from each test against linear correlation-only alternatives.

For each sample size T and regardless of the test from which the procedure is started, the corresponding probabilities of rejection of the random walk 1 hypothesis are distributed symmetrically around the value \(\phi_{1} = 0\) (random walk 1 hypothesis). Additionally, these probabilities tend to unity as \(\left| {\phi_{1} } \right|\) increases, reaching 1 for values of \(\left| {\phi_{1} } \right|\) increasingly closer to 0 as the sample size T increases. The velocity of the described behaviour depends on the test from which the procedure is started:

For the two smallest sample sizes ( \(T = 25\) and 50), a power of 1 is hardly achieved for any of the alternatives. Only at \(T = 50\) is the power approximately 100 percent, with the procedure initiated from any of the four tests, for \(\left| {\phi_{1} } \right| \ge 0.75\) . On the other hand, at \(T = 25\) , the estimated powers of the procedure initiated from the BDS test for \(\left| {\phi_{1} } \right| \le 0.5\) are much higher than those presented by the other cases. A similar situation occurs at \(T = 50\) , but with less pronounced differences between what the procedure with the BDS test and the other cases yield and restricted to the alternatives with \(\left| {\phi_{1} } \right| \le 0.25\) .

From sample size 100, we observe differences in the convergence to unity of the estimated powers according to the test from which the procedure is initiated. Thus, when starting from the Ljung–Box test and the variance ratio test, a power of approximately 1 is achieved for \(\left| {\phi_{1} } \right| \ge 0.5\) at \(T = 100\) , whereas for larger sample sizes, convergence to 1 is nearly reached for \(\left| {\phi_{1} } \right| \ge 0.25\) . On the other hand, when the procedure is started from the BDS test, a power of 1 is reached for \(\left| {\phi_{1} } \right| \ge 0.75\) at \(T = 100\) and for \(\left| {\phi_{1} } \right| \ge 0.5\) at \(T \ge 250\) (note that at \(T = 1000\) , the estimated power does not exceed 0.89 for \(\left| {\phi_{1} } \right| = 0.25)\) . Finally, when the procedure is initiated from the runs test, the value of \(\left| {\phi_{1} } \right|\) for which the powers achieve unity decreases as the sample size T increases beyond 100. Specifically, at \(T = 100\) , unity is reached for \(\left| {\phi_{1} } \right| \ge 0.75\) ; at \(T = 250\) , for \(\left| {\phi_{1} } \right| \ge 0.5\) ; and at \(T = 1000\) , for \(\left| {\phi_{1} } \right| \ge 0.25\) (at \(T = 500\) , the power is approximately 0.95 for \(\left| {\phi_{1} } \right| = 0.25\) ). The plots in Fig.  5 show that the power function of the procedure initiated from the Ljung–Box test is always above the other power functions, i.e., it is uniformly more powerful for \(T \ge 100\) .

Regardless of the test from which the procedure is started, a power of 1 is not achieved for \(\left| {\phi_{1} } \right| = 0.1\) for any sample size, not even at \(T = 1000\) (the best result corresponds to the Ljung–Box case with an estimated power of approximately 0.91, followed by the variance ratio and runs cases with values close to 0.8 and 0.53, respectively; the BDS case yields the worst result of approximately 0.18).

At this point, we can say that the power of the procedure has been analysed, that is, its capability of rejecting the null hypothesis (random walk 1) when the null hypothesis is false. As already mentioned, for \(\phi_{1} \ne 0\) , Model (6) yields a series that does not follow any type of random walk. However, the proposed procedure contemplates random walk 3 among the possible decisions. Therefore, if from the powers calculated for each version of the procedure, we subtract the portion that corresponds to the (wrong) decision of random walk 3, we obtain the power that the procedure initiated from each test actually has, i.e., its capability to reject the null hypothesis in favour of true alternatives when the null hypothesis is false.

In this sense, Table 4 and Fig.  6 report, for each sample size T , the power calculations of the procedure initiated from each of the tests considered after subtracting the effect of the (false) alternative of random walk 3. Furthermore, the cited powers and those initially calculated for each version of the procedure are compared for each T in Figs. 9, 10, 11 and 12 (Appendix).

Figure 6 Real Monte Carlo power of the procedure when starting from each test against linear correlation-only alternatives.

When the procedure is started from the runs test, the variance ratio test or the Ljung–Box test (Appendix Figs. 10, 11, 12), what we call real power hardly differs from that initially calculated for each sample size T (these slight differences occur for \(\left| {\phi_{1} } \right| \le 0.5\) with \(T \le 100\) and \(\left| {\phi_{1} } \right| = 0.1\) with \(T \ge 250\)). Therefore, all the above-mentioned findings in relation to the power of these three cases are maintained.

Nevertheless, there are considerable differences between the real power and that initially calculated when the procedure is started from the BDS test. In effect, the initial calculations indicated that this version of the procedure was the most powerful for \(\left| {\phi_{1} } \right| \le 0.5\) and \(\left| {\phi_{1} } \right| \le 0.25\) for \(T = 25\) and \(T = 50\), respectively (with all the values greater than 0.5), but the results in Table 4 and Fig. 6 show that the powers in these cases are actually much lower (0.2 is hardly reached in one single case). Although these differences persist for \(T = 100\), also in the context of \(\left| {\phi_{1} } \right| \le 0.25\), they start to decrease as the sample size increases from \(T \ge 250\) (we could say that, for \(T \ge 500\), there are minimal differences between the real power and the initially calculated power).

Consequently, in terms of the power referring only to true alternatives (linear correlation in this case), the procedure initiated from the Ljung–Box test is the most powerful.

(b2) The performance of the procedure for testing the random walk 1 hypothesis against the non-linear-only alternative (among the variables of the return series generating process) is analysed by means of an ARCH(1) model

$$r_{t} = h_{t} \varepsilon_{t}, \qquad h_{t}^{2} = \alpha_{0} + \alpha_{1} r_{t - 1}^{2}, \qquad (7)$$

where \(h_{t}\) and \(\varepsilon_{t}\) are processes independent of each other such that \(h_{t}\) is stationary and \(\varepsilon_{t} \sim iid(0,1)\), with \(\alpha_{0} > 0\) and \(\alpha_{1} \ge 0\). Specifically, taking \(r_{0} = 0\) in (7), 10,000 samples of sizes T = 25, 50, 100, 250, 500 and 1000 of the series \(r_{t}\) are generated for \(\alpha_{0} = 1\) and each value of \(\alpha_{1}\) considered: 0, 0.1, 0.2, 0.3, 0.4 and 0.5 (see footnote 9). In the particular case in which \(\alpha_{1} = 0\), Model (7) yields a return series that follows a random walk 1 and, for \(\alpha_{1} > 0\), series that are identified with a random walk 3, i.e., they would be generated by a process whose variables are uncorrelated but dependent (there are non-linear relationships among the variables; see footnote 10). Therefore, when random walk 3 is accepted, it is possible to develop models that allow market volatility to be predicted (model types ARCH and GARCH).

The procedure, starting from each of the four tests considered in this study, was applied to the different series generated by the combination of values for T and \(\alpha_{1}\) with a significance level of 5% [application of the process in Fig.  4 for expression ( 7 )]. Then, we calculated the number of times that the different decisions contemplated by the two already known ways of applying the procedure were made.

On the basis of the results indicated in the previous paragraph and analogously to that described in Section (b1), we calculate, for each sample size T , the empirical size and the Monte Carlo power of each version of the procedure. In this context, \(\alpha_{1} = 0\) implies that the random walk 1 hypothesis is true, and \(\alpha_{1} > 0\) implies that it is not (it corresponds to a random walk 3).

Since in this case the null hypothesis is again random walk 1, the obtained empirical sizes are nearly identical to those calculated in Section (b1) (the results are available on request).

Table 5 and Fig.  7 show, respectively, the power calculations of each version of the procedure and the plots of the corresponding power functions (in terms of parameter \(\alpha_{1}\) ) for each sample size T .

Figure 7 Monte Carlo power of the procedure when starting from each test against non-linear alternatives only.

The estimated power of the procedure when starting from the runs test is approximately 0.05 for all alternatives, irrespective of the value of T . In the other cases, the power is influenced by parameters T and \(\alpha_{1}\) ; as the values of these parameters increase, the power tends to unity.

Fig.  7 shows that the procedure initiated from the BDS test is uniformly more powerful when \(T \le 100\) , and the difference between the estimated powers of the procedure with the BDS test and those of the other cases becomes more pronounced as the sample size decreases. When \(T = 25\) , the estimated power of the procedure initiated from the BDS test is approximately 0.7 for all alternatives, while the estimated power when starting from the Ljung–Box test and the variance ratio test increases with \(\alpha_{1}\) from 0.2 and 0.08 to 0.35 and 0.23, respectively. The difference in the estimated power in favour of the procedure initiated from the BDS test decreases with increasing sample size T , especially at high values of \(\alpha_{1}\) . Likewise, in all three cases, the estimated power improves when the sample size increases, but a power of 1 is not reached in any case (at \(T = 100\) , the estimated power for \(\alpha_{1} = 0.5\) is approximately 0.8 in all three cases).

For \(T \ge 250\) , the estimated power of the procedure initiated from the BDS test, Ljung–Box test and variance ratio test converges to 1 as \(\alpha_{1}\) increases. In all these cases, the value of \(\alpha_{1}\) for which the power achieves unity decreases as the sample size increases. Thus, at \(T = 250\) , unity is reached for \(\alpha_{1} = 0.5\) ; at \(T = 500\) , for \(\alpha_{1} \ge 0.3\) , and at \(T = 1000\) , for \(\alpha_{1} \ge 0.2\) . On the other hand, the plots in Fig.  7 show that the power function of the procedure initiated from the Ljung–Box is always above the other power functions, i.e., it is uniformly more powerful for \(T \ge 250\) . However, the difference in the estimated power (in favour of the procedure initiated with the Ljung–Box test) is not pronounced.

Finally, regardless of the test from which the procedure is started, a power of 1 is not achieved for \(\alpha_{1} = 0.1\) for any sample size, not even \(T = 1000\) (the best result corresponds to the Ljung–Box case with an estimated power of approximately 0.83, followed by the variance ratio case with a value of 0.82; the BDS case yields the worst result–without considering the runs case–of approximately 0.74).

In this case, for alternative \(\alpha_{1} > 0\), Model (7) yields a series that follows a random walk 3, and the proposed procedure contemplates "non-random walk" among the possible decisions. Therefore, it is interesting to analyse, with each version of the procedure, to what extent the rejection of the random walk 1 hypothesis (when this is false) leads correctly to random walk 3. In other words, we are interested in determining what part of the power calculated in each case corresponds to the acceptance of the random walk 3 hypothesis (under the assumption that the hypothesis of random walk 1 is false). As in Section (b1), we calculate the power that the procedure initiated from each test actually has. Thus, Table 5 and Fig. 8 report, for each sample size T, Monte Carlo estimates of the probability of accepting the random walk 3 hypothesis (given that type 1 is false) with the procedure initiated from each of the tests considered (i.e., the real power). Additionally, the cited real powers and those initially calculated for each version of the procedure are compared for each T in Figs. 13, 14, 15 and 16 (Appendix).

Figure 8 Real Monte Carlo power of the procedure when starting from each test against non-linear alternatives only.

As shown (Fig. 13), almost all the powers of the procedure initiated from the BDS test correspond to the acceptance of random walk 3 since the so-called real power hardly differs from that initially calculated for each sample size T . For large sample sizes, the real power tends to stabilize at approximately 0.96 as \(\alpha_{1}\) increases.

Similar behaviour is observed when the procedure is started from the variance ratio test, with the exception that, for \(T \ge 250\) , the real powers become lower than those initially calculated as \(\alpha_{1}\) increases (for example, at \(T = 1000\) , the estimated power for \(\alpha_{1} = 0.5\) was initially 1, but only 80% corresponds to the acceptance of the random walk 3 hypothesis).

Finally, the results show that an important part of the power initially calculated for the procedure when starting from the Ljung–Box test corresponds to the acceptance of a wrong alternative, i.e., the real power is significantly lower than the initial power, mainly at the small sample sizes ( \(T = 25\) and 50). The extent of this loss of power decreases when \(T \ge 100\), and at \(T \ge 250\), the observed behaviour for high values of \(\alpha_{1}\) is the same as that described for the variance ratio case. Regardless, the real powers for the Ljung–Box case are lower than those for the variance ratio case.

Consequently, for the random walk 3 alternative (the only one that is true in this case), the procedure initiated from the BDS test is the most powerful.

5 Concluding comments

The methods traditionally applied to test weak efficiency in a financial market, as stated by the random walk model, have serious limitations: they test either only one type of random walk or only some necessary, but not sufficient, condition for accepting the random walk hypothesis in one of its forms.

To address these limitations, a procedure that strategically combines traditional methods is proposed to detect whether a return series follows a specific type of random walk (1, 2 or 3). When the random walk hypothesis is rejected, market inefficiency is accepted, i.e., the market is predictable. In this context, future price changes can be predicted based on past price changes through a model of asset prices.

The proposed procedure is evaluated in the context of a random walk 1 against linearity and non-linearity alternatives using a Monte Carlo experiment. This procedure is applied starting from methods that test only a necessary, but not sufficient, condition for the fulfilment of the random walk 1 hypothesis (variance ratio test and Ljung–Box test) and from methods that directly test a particular type of random walk (BDS test and runs test).

The results allow us to conclude that, against linear correlation-only alternatives, the procedure performs best when starting from the Ljung–Box test. In this case, the real power of the procedure is higher than when starting from any of the other tests, for any sample size, especially for larger ones ( \(T \ge 100\) ). In all cases, serious power distortions occur in the alternatives close to the null hypothesis (RW1). However, these distortions disappear as the sample size increases, except when the procedure is initiated from the BDS test (the aforementioned distortions remain for large sample sizes).

In contrast, against the random walk 3 alternative, the highest real powers for each sample size occur when the procedure is started from the BDS test. Again, all cases show poor real power in the alternatives close to the null hypothesis (random walk 1). These powers improve as the sample size increases, except in the case where the procedure is initiated from the runs test, which retains very low power against the RW3 alternative for all sample sizes (around the significance level \(\alpha = 0.05\) ).

Regarding the size of the procedure, all the cases analysed present empirical values very similar to the corresponding estimated nominal size (for a significance level of \(\alpha = 0.05\) ). In particular, the procedure initiated from the BDS test exhibits the greatest size distortions for small samples. However, there are no distortions when the procedure is started from the runs test, although its application is discouraged because its power for the random walk 3 alternative is poor.

The procedure introduced in this paper has been applied to evaluate the degree of fulfilment of the weak efficiency hypothesis in four European financial markets (Spain, Germany, France and Italy) from 1st January 2010 to 15th May 2020 (García-Moreno and Roldán 2021 ).

Currently, the authors are analysing the performance of the proposed procedure against other alternatives to the random walk hypothesis that are not considered in this work. They are also analysing the performance of the procedure when it combines formal and non-formal statistical inference techniques to accommodate random walk 2.

Availability of data and material

Not applicable.

Code availability

The additional restrictions that are usually imposed correspond to the conditions of random walks 1 or 2, which fulfil the martingale hypothesis in a stronger sense (see Sects.  2.2 .a and 2.2.b).

Filter rules and technical analysis are two forms of empirical testing of the RW2 hypothesis that, by not making use of formal statistical inference, are considered "economic" tests of the random walk 2 hypothesis.

Methods that try to detect non-linear relationships are called non-linear methods regardless of whether they are sensitive to the existence of linear relationships.

According to Hinich et al. ( 2005 ), economic systems are non-linear, and if this non-linearity is considerable, it is erroneous to forecast based on an estimated linear approximation. Therefore, the authors claim that testing for non-linearity is a means of validating the linearity of a system.

Since the testing of the RW2 hypothesis is not based on formal statistical inference, only the random walk types 1 and 3 can be considered in the experiment.

The variance ratio test suggested by Lo and MacKinlay ( 1988 ) has two versions: one that allows testing the uncorrelation of a return series and another that tests if the aforementioned series follows a random walk 3. The application of one version or another will depend on what the procedure requires at all times. If the variance ratio test, in any of its versions, leads to contradictory decisions for different values of parameter k (first values of the return series), the final decision is based on the global test proposed by Chow and Denning ( 1993 ).

Nonparametric test proposed by Brock, Dechert, LeBaron and Scheinkman (1996) for testing the null hypothesis that a series is independent and identically distributed. It is based on the correlation integral developed by Grassberger and Procaccia ( 1983 ), which is a measure of spatial correlation between two points of an m -dimensional space. We consider m  = 2, 3, 4 and 5 since Monte Carlo experiments have shown that the BDS statistic has good properties for \(m \le 5\) , regardless of the sample size (Kanzler 1999 ).

All the simulations were performed using routines developed in EViews 8 with the random number generator contained therein.

For an ARCH(1) model such as (7), the 4th-order moment of \(r_{t}\),

\(E\left[ {r_{t}^{4} } \right] = \frac{{3\alpha_{0}^{2} (1 + \alpha_{1} )}}{{(1 - \alpha_{1} )(1 - 3\alpha_{1}^{2} )}},\)

will be finite and positive if \(\alpha_{1}^{2} \in [0,1/3)\).

From Model (7) and the conditions under which it is defined, both the uncorrelation of \(r_{t}\),

\(Cov(r_{t} ,r_{t - k} ) = E\left[ {(h_{t} \varepsilon_{t} )(h_{t - k} \varepsilon_{t - k} )} \right] = E\left[ {h_{t} h_{t - k} } \right]E\left[ {\varepsilon_{t} \varepsilon_{t - k} } \right] = 0 \quad \forall k \ne 0\)

(where it has been taken into account that \(E\left[ {r_{t} } \right] = E\left[ {h_{t} \varepsilon_{t} } \right] = E\left[ {h_{t} } \right]E\left[ {\varepsilon_{t} } \right] = 0\) \(\forall t\)), and the non-linear relationship among the variables of the process follow, since \(r_{t}^{2}\) has a first-order autoregressive structure:

\(r_{t}^{2} = h_{t}^{2} + (r_{t}^{2} - h_{t}^{2} ) = \alpha_{0} + \alpha_{1} r_{t - 1}^{2} + v_{t} ,\)

where \(v_{t} = r_{t}^{2} - h_{t}^{2} = h_{t}^{2} (\varepsilon_{t}^{2} - 1)\) is white noise.
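
A short simulation makes this footnote tangible: ARCH(1) returns are serially uncorrelated while their squares follow an AR(1) with coefficient \(\alpha_1\). The Python sketch below (parameter values are illustrative, not taken from the paper) checks both facts:

    import numpy as np

    rng = np.random.default_rng(2)
    alpha0, alpha1, n = 0.2, 0.5, 50_000   # alpha1**2 = 0.25 < 1/3
    r = np.zeros(n)
    for t in range(1, n):
        h2 = alpha0 + alpha1 * r[t - 1] ** 2   # conditional variance
        r[t] = np.sqrt(h2) * rng.normal()

    def acf1(x):
        x = x - x.mean()
        return np.dot(x[:-1], x[1:]) / np.dot(x, x)

    print(acf1(r))       # ~ 0: returns are uncorrelated (RW3-compatible)
    print(acf1(r ** 2))  # ~ alpha1: squared returns are autocorrelated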

Açık A, Baran E, Ayaz İS (2020) Testing the efficient market hypothesis: a research on stocks of container shipping companies. Glob J Econ Bus Stud 9(17):1–12

Adaramola AO, Obisesan OG (2021) Adaptive market hypothesis: evidence from the Nigerian Stock Exchange. J Dev Areas 55(2):1–16

Alexeev V, Tapon F (2011) Testing weak form efficiency on the Toronto Stock Exchange. J Empir Financ 18(4):661–691

Alvarez-Ramirez J, Escarela-Perez R (2010) Time-dependent correlations in electricity markets. Energy Econ 32(2):269–277

Appiah-Kusi J, Menyah K (2003) Return predictability in African stock markets. Rev Financ Econ 12(3):247–270

Bachelier L (1900) Théorie de la spéculation. In: Annales Scientifiques de l'É.N.S., 3e série, tome 17, pp 21–86

Bailey RE (2005) The economics of financial markets. Cambridge University Press, New York

Brock WA, Dechert WD, Scheinkman JA (1987) A test for independence based on the correlation dimension. University of Wisconsin at Madison, Department of Economics Working Paper

Brock WA, Dechert WD, Lebaron B, Scheinkman JA (1996) A test for independence based on the correlation dimension. Econ Rev 15(3):197–235

Buguk C, Brorsen BW (2003) Testing weak-form market efficiency: evidence from the Istanbul Stock Exchange. Int Rev Financ Anal 12(5):579–590

Campbell JY, Lo AW, Mackinlay AC (1997) The econometrics of financial markets. Princeton University Press, New Jersey

Chen CW, Huang CS, Lai HW (2009) The impact of data snooping on the testing of technical analysis: an empirical study of Asian stock markets. J Asian Econ 20(5):580–591

Cheong CW, Nor AHSM, Isa Z (2007) Asymmetry and longmemory volatility: some empirical evidence using GARCH. Physica A 373:651–664

Chow KV, Denning KC (1993) A simple multiple variance ratio test. J Econ 58(3):385–401

Chu J, Zhang Y, Chan S (2019) The adaptive market hypothesis in the high frequency cryptocurrency market. Int Rev Financ Anal 64:221–231

Chuang WI, Liu HH, Susmel R (2012) The bivariate GARCH approach to investigating the relation between stock returns, trading volume, and return volatility. Glob Financ J 23(1):1–15

DePenya FJ, Gil-Alana LA (2007) Serial correlation in the Spanish stock market. Glob Financ J 18(1):84–103

Dicle MF, Beyhan A, Yao LJ (2010) Market efficiency and international diversification: Evidence from India. Int Rev Econ Financ 19(2):313–339

Fama EF (1965) The behavior of stock-market prices. J Bus 38(1):34–105

Fama EF (1970) Efficient capital markets: a review of theory and empirical work. J Financ 25(2):383–417

Fama EF (1991) Efficient capital markets: II. J Financ 46(5):1575–1617

Fisher L (1966) Some new stock-market indexes. J Bus 39:191–225

French KR (1980) Stock returns and the weekend effect. J Financ Econ 8(1):55–69

García-Moreno MB, Roldán JA (2021) Análisis del grado de eficiencia débil en algunos mercados financieros europeos. Primer impacto del COVID-19. Revista de Economía Mundial 59:243–269

Grassberger P, Procaccia I (1983) Characterization of strange attractors. Phys Rev Lett 50(5):346–349

Gultekin MN, Gultekin NB (1983) Stock market seasonality: international evidence. J Financ Econ 12:469–481

Hasan T, Kadapakkam PR, Ma Y (2003) Tests of random walk for Latin American stock markets: additional evidence. Lat Am Bus Rev 4(2):37–53

Hinich MJ, Mendes EM, Stone L (2005) A comparison between standard bootstrap and Theiler's surrogate methods. University of Texas, Austin

Ho KY, Zheng L, Zhang Z (2012) Volume, volatility and information linkages in the stock and option markets. Rev Financ Econ 21(4):168–174

Hoque HA, Kim JH, Pyun CS (2007) A comparison of variance ratio tests of random walk: a case of Asian emerging stock markets. Int Rev Econ Financ 16(4):488–502

Jamaani F, Roca E (2015) Are the regional Gulf stock markets weak-form efficient as single stock markets and as a regional stock market? Res Int Bus Financ 33:221–246

Jayasinghe P, Tsui AK (2008) Exchange rate exposure of sectoral returns and volatilities: evidence from Japanese industrial sectors. Jpn World Econ 20(4):639–660

Juana J (2017) Foreign exchange market efficiency in Botswana. Rev Econ Bus Stud 10(1):103–125

Kanzler L (1999) Very fast and correctly sized estimation of the BDS statistic. Unpublished manuscript, Department of Economics, University of Oxford

Khan W, Vieito JP (2012) Stock exchange mergers and weak form of market efficiency: the case of Euronext Lisbon. Int Rev Econ Financ 22(1):173–189

Khanh P, Dat P (2020) Efficient market hypothesis and calendar effects: empirical evidences from the Vietnam stock markets. Accounting 6(5):893–898

Kołatka M (2020) Testing the adaptive market hypothesis on the WIG Stock Index: 1994–2019. Prace Naukowe Uniwersytetu Ekonomicznego We Wrocławiu 64(1):131–142

Kumar D (2018) Market efficiency in Indian exchange rates: adaptive market hypothesis. Theor Econ Lett 8(9):1582–1598

Leković M (2018) Evidence for and against the validity of efficient market hypothesis. Econ Themes 56(3):369–387

Lim KP, Brooks RD, Hinich MJ (2008a) Nonlinear serial dependence and the weak-form efficiency of Asian emerging stock markets. J Int Financ Markets Inst Money 18(5):527–544

Lim KP, Brooks RD, Kim JH (2008b) Financial crisis and stock market efficiency: empirical evidence from Asian countries. Int Rev Financ Anal 17(3):571–591

Ljung GM, Box GEP (1978) On a measure of lack of fit in time series models. Biometrika 65(2):297–303

Lo AW, MacKinlay AC (1988) Stock market prices do not follow random walks: evidence from a simple specification test. Rev Financ Stud 1(1):41–66

Marshall BR, Young MR, Rose LC (2006) Candlestick technical trading strategies: can they create value for investors? J Bank Finance 30(8):2303–2323

Moore AB (1964) Some characteristics of changes in common stock prices. In: Cootner P (ed) The random character of stock market prices. MIT Press, Cambridge, pp 262–296

Nti IK, Adekoya AF, Weyori BA (2020) A systematic review of fundamental and technical analysis of stock market predictions. Artif Intell Rev 53(4):3007–3057

Omane-Adjepong M, Alagidede P, Akosah NK (2019) Wavelet time-scale persistence analysis of cryptocurrency market returns and volatility. Physica A 514:105–120

Picasso A, Merello S, Ma Y, Oneto L, Cambria E (2019) Technical analysis and sentiment embeddings for market trend prediction. Expert Syst Appl 135:60–70

Potvin JY, Soriano P, Vallée M (2004) Generating trading rules on the stock markets with genetic programming. Comput Oper Res 31(7):1033–1047

Righi MB, Ceretta PS (2013) Risk prediction management and weak form market efficiency in Eurozone financial crisis. Int Rev Financ Anal 30:384–393

Roberts HV (1967) Statistical versus clinical prediction of the stock market. Unpublished paper presented at The Seminar on the Analysis of the Security Prices, University of Chicago

Rozeff MS, Kinney WR (1976) Capital market seasonality: the case of stock market returns. J Financ Econ 3:379–402

Rossi M, Gunardi A (2018) Efficient market hypothesis and stock market anomalies: empirical evidence in four European countries. J Appl Bus Res 34(1):183–192

Ryaly VR, Kumar RK, Urlankula B (2014) A study on weak-form of market efficiency in selected Asian stock markets. Indian J Finance 8(11):34–43

Samuelson PA (1965) Proof that properly anticipated prices fluctuate randomly. Ind Manag Rev 6(2):41–49

Sánchez-Granero MA, Balladares KA, Ramos-Requena JP, Trinidad-Segovia JE (2020) Testing the efficient market hypothesis in Latin American stock markets. Physica A 540:1–14

Shynkevich A (2012) Short-term predictability of equity returns along two style dimensions. J Empir Financ 19(5):675–685

Stoian A, Iorgulescu F (2020) Fiscal policy and stock market efficiency: an ARDL bounds testing approach. Econ Model 90:406–416

Theil H, Leenders CT (1965) Tomorrow on the Amsterdam stock exchange. J Bus 38:277–284

Tiwari AK, Aye GC, Gupta R (2019) Stock market efficiency analysis using long spans of data: a multifractal detrended fluctuation approach. Financ Res Lett 28:398–411

Yao H, Rahaman ARA (2018) Efficient market hypothesis and the RMB-dollar rates: a nonlinear modeling of the exchange rate. Int J Econ Finance 10(2):150–160

Open Access funding provided thanks to the CRUE-CSIC agreement (University of Cordoba/CBUA) with Springer Nature.

Author information

Authors and Affiliations

Department of Statistics, Econometrics and Operational Research, University of Cordoba, Córdoba, Spain

José A. Roldán-Casas & Mª B. García-Moreno García

Corresponding author

Correspondence to José A. Roldán-Casas.

Ethics declarations

Conflict of interest

We have no conflicts of interest to disclose.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

See Figs. 9, 10, 11, 12, 13, 14, 15 and 16.

Fig. 9 Procedure started from the BDS test: power vs. real power (linear correlation-only alternatives)

Fig. 10 Procedure started from the runs test: power vs. real power (linear correlation-only alternatives)

Fig. 11 Procedure started from the variance ratio test: power vs. real power (linear correlation-only alternatives)

Fig. 12 Procedure started from the Ljung–Box test: power vs. real power (linear correlation-only alternatives)

Fig. 13 Procedure started from the BDS test: power vs. real power (non-linear alternatives only)

Fig. 14 Procedure started from the runs test: power vs. real power (non-linear alternatives only)

Fig. 15 Procedure started from the variance ratio test: power vs. real power (non-linear alternatives only)

Fig. 16 Procedure started from the Ljung–Box test: power vs. real power (non-linear alternatives only)

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Roldán-Casas, J.A., García-Moreno García, M.B. A procedure for testing the hypothesis of weak efficiency in financial markets: a Monte Carlo simulation. Stat Methods Appl 31, 1289–1327 (2022). https://doi.org/10.1007/s10260-022-00627-4

Download citation

Accepted : 19 February 2022

Published : 31 March 2022

Issue Date : December 2022

DOI : https://doi.org/10.1007/s10260-022-00627-4

Keywords

  • Financial markets
  • Random walk
  • Sequential testing strategy
  • Monte Carlo experiment

Open Access

Peer-reviewed

Research Article

An Algorithm for Testing the Efficient Market Hypothesis

Ioana-Andreea Boboc, Mihai-Cristian Dinică

Affiliations: Financial Engineering Section, Swiss Finance Institute at École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; Department of Finance, Bucharest University of Economic Studies, Bucharest, Romania

* E-mail: [email protected]

  • Published: October 29, 2013
  • https://doi.org/10.1371/journal.pone.0078177

Abstract

The objective of this research is to examine the efficiency of the EUR/USD market through the application of a trading system. The system uses a genetic algorithm based on technical analysis indicators such as the Exponential Moving Average (EMA), Moving Average Convergence Divergence (MACD), Relative Strength Index (RSI) and Filter, and gives buying and selling recommendations to investors. The algorithm optimizes the strategies by dynamically searching for parameters that improve profitability in the training period. The best sets of rules are then applied to the testing period. The results show inconsistency in finding a set of trading rules that performs well in both periods. Strategies that achieve very good returns in the training period have difficulty producing positive results in the testing period, which is consistent with the efficient market hypothesis (EMH).

Citation: Boboc I-A, Dinică M-C (2013) An Algorithm for Testing the Efficient Market Hypothesis. PLoS ONE 8(10): e78177. https://doi.org/10.1371/journal.pone.0078177

Editor: Rodrigo Huerta-Quintanilla, Cinvestav-Merida, Mexico

Received: July 1, 2013; Accepted: September 13, 2013; Published: October 29, 2013

Copyright: © 2013 Boboc, Dinică. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: This work was co-financed from the European Social Fund through Sectorial Operational Programme Human Resources Development 2007–2013, project number POSDRU/107/1.5/S/77213, Ph.D. for a career in interdisciplinary economic research at the European standards. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

Introduction

This paper describes a genetic algorithm used to create a trading system, consisting of several rules for opening and closing trading positions in the FX market. The aim of this study is to assess the weak-form efficiency of the EUR/USD market. Our paper shows that the out-of-sample outcomes are distributed around an average close to 0. This provides evidence that all the information available in the EUR/USD market is reflected in the price and that no arbitrage can be made by trading this currency pair based on historical information.

Our findings should capture the attention of investors in the FX market who base their decisions on technical analysis signals. The results support the previous academic literature, which in general provides evidence that financial market movements cannot be forecast by analyzing historical prices alone.

Algorithmic trading has evolved exponentially in recent years because of more rapid reactions to temporary mispricing and easier price management from several markets [1] . As compared to human dealers, computers can learn from thousands of sources of information simultaneously and avoid emotional influence.

Technical analysis is a methodology of forecasting price movements by analyzing past market data [2] . The efficient market hypothesis (EMH) [3] contradicts this approach by stating that all public information in the market is immediately reflected in prices; therefore, no arbitrage can be made based on historical data. The time series is split into two parts. The trading system, with several parameters, is applied in-sample over the training period, and the strategies that generate the highest returns are selected and tested over the following (out-of-sample) period. The objective of the system is to achieve high returns over the testing period. The impossibility of finding a strategy that performs well over both the training and the testing period would support the EMH.

The research proceeds as follows. This section offers a review of the existing literature regarding tests of the efficient market hypothesis, studies on the performance of technical analysis based on several indicators, and the improvement of trading strategies using genetic algorithms. Section 2 presents the database used for testing the efficiency of the system and the methodology involved. Section 3 discusses the empirical findings of our analysis and concludes. A single currency pair, EUR/USD, is used.

Efficient Market Hypothesis

EMH, developed by Eugene Fama [3] , assumes that all the information in the market at a specific moment is reflected in prices and that market participants therefore cannot consistently perform better than the average market return on a risk-adjusted basis. However, empirical findings have shown that the EMH may be questionable. Hasan et al. [4] find inefficiencies in the Dhaka stock market. They notice that factors like return, market capitalization, book-to-market ratio and market value influence share returns. Moreover, features such as thin trading, volatility, a small number of listed securities and investors' attitude towards investment strategy characterize the Dhaka Stock Exchange (DSE), as well as other emerging markets.

Several studies find that the level of efficiency varies over time and across markets. Alvarez-Ramirez et al. [5] observe that the efficiency degree of financial markets changes over time. The relative efficiency of the US stock market varied over 1929–2012, with a decline in the late 2000s induced by the economic recession; the most efficient period was 1973–2003. Another study showing that the degree of inefficiency is not constant over time is [6] : the IRR/USD market was inefficient over 2005–2010, possibly because of negative long-range dependence, meaning that if the exchange rate rises it is likely to fall in the near future. A similar result is revealed by Kim et al. [7] . They provide evidence that supports time-varying return predictability of the Dow Jones Industrial Average index over the period 1900–2009. While the market seems efficient during market crashes, economic and political crises induce predictability in returns. The efficiency of the Asian stock markets varies with the level of equity market development [8] . The developed emerging markets are found to be weak-form efficient, while the secondary emerging markets are characterized by inefficiencies.

Dragota et al. [9] could not reject the weak-form EMH for the Bucharest Stock Exchange by applying the multiple variance ratio test of the random walk hypothesis. For the same market, Armeanu and Balu [10] tested the efficiency of the Markowitz model, emphasizing the benefits of portfolio diversification. Charles et al. [11] evaluated the predictability of exchange rate returns and found that while they are unpredictable most of the time, return predictability may appear with coordinated central bank interventions and financial crises. The efficiency of the Chinese stock markets is investigated in [12] . The results show that Class A shares, which are generally available to domestic investors, seem efficient, while Class B shares, eligible for foreigners, are significantly inefficient. Trolle and Schwartz [13] , using a database of 11 years of data for crude oil and natural gas futures and options traded on NYMEX, found that it is difficult to explain the variation and the level of energy variance risk premia using systematic factors, such as the returns on commodity or equity market portfolios, or specific factors, such as inventories.

Technical Analysis

Most automated trading systems use several indicators in order to generate purchase and sale recommendations [14] . One study found that the best indicator for companies with high capitalization is RSI, while the best for small-capitalization companies is Momentum. Moreover, indicators that do not give many trade signals, such as Momentum, are more suitable when transaction costs are high. Another study assessed the performance of technical analysis in the US equity market for some technology industry sectors and small caps over the period 1995–2010 [15] . Results show that the strategies are capable of outperforming the buy-and-hold strategy after adjusting for data-snooping bias and without transaction costs in the first half of the sample period. However, the same strategies are not able to produce superior performance over the second half, and success in the period 1995–2002 is tempered when transaction costs are introduced. Moreover, the forecasting of short-term returns became weaker in recent years, which is consistent with the EMH in the equity market. A positive performance of technical analysis is reported from applying moving average trading rules on 16 European stock markets over the period 1990–2006 [16] . A moving average trading rule, combined with a strategy that recommends investing in the stock market at buy signals and in the money market at sell signals, outperforms the buy-and-hold strategy over the sample period.

The study in [17] finds that one can achieve high returns using trading strategies only with full information about future stock price changes. However, if the future information is not accurate, it can be useless for increasing profits. Moreover, searching a strategy space for high profits is impossible without such future information about a company.

Trading strategies have mainly been based on technical analysis in the commodity futures market [18] , [19] , [20] and the foreign exchange market [21] , [22] , [23] , [24] . Evaluation of the performance of technical analysis in the equity markets has generally been done using market indices such as the Dow Jones Industrial Average [25] , [26] , S&P 500 [26] , [27] , NYSE and NASDAQ [26] , [28] , [29] or Russell 2000 [26] , [27] , [29] . Technical analysis has evolved beyond filter and moving-average rules, now including psychological barriers such as resistance and support levels [30] , [31] .

Genetic Algorithms

In recent years, individuals and companies have developed algorithms that try to improve the profitability of trading rules. Genetic algorithms (GA) represent a class of optimization techniques that generate solutions to search problems and quickly adapt to changing environments. GA were developed by Holland [32] and simulate the process of natural evolution. As species evolve through genetic processes such as selection, crossover and mutation, GA create classes of solutions that evolve over several generations through analogous processes in order to generate one solution with the best fit to the specific problem [33] . Algorithms start by creating some strategies with specific parameters. In the following steps, they dynamically change their parameters in order to achieve higher profits.

In a natural evolution process, species change over time. New organisms are born by recombination between members. They inherit their parents' traits and are also influenced by environmental conditions. Natural selection comes from the fact that, as the population grows, organisms need to struggle for resources. Therefore, only the organisms that possess characteristics well suited to this struggle will bring more offspring to the new generation.

Holland [32] developed a way in which the natural evolution process might be imported into algorithms that offer solutions to search problems. GA are very suitable for managing financial markets because these represent a continuously changing environment and trading strategies need to adapt to the new conditions. The search problem is represented by finding a strategy that achieves positive excess returns when applied to a specific sample. GA generate many strategies, and those well fitted (according to a specific function that can be the mean return, the Sharpe ratio or one that also takes environmental conditions into account) are selected to pass into the new generation and to recombine into new strategies.

Mendes et al. [34] developed a system based on a genetic algorithm that optimizes a set of rules to obtain a profitable strategy to trade EUR/USD and GBP/USD. The system generates individuals defined by ten mandatory and optional rules, five of which decide whether or not to open a long/short position at the current market price, while the other five decide when to close an open position. The rules contain 31 parameters that evolve over many generations through selection, crossover and mutation, and, based on return and risk, the individual with the highest performance is selected and tested in the next period. Results have shown that, considering transaction costs, the best individuals in the training series were often not able to achieve positive results in the out-of-sample test series. Dempster and Jones [2] created an adaptive trading system that uses genetic programming. They used USD/GBP spot foreign exchange tick data from 1994 to 1997. The algorithm is applied on out-of-sample data to provide new rules, and a feedback system helps rebalance the rule portfolio. The genetic algorithm is profitable even in the presence of transaction costs.

Another study of the performance of genetic algorithms in FX markets is [35] . The authors show that the system often returned a profit when the testing period was consecutive to the training period. They concluded that the success of the system depended on the similarity of the trends in the two periods. Genetic algorithms also succeeded in finding performing trading rules for six exchange rates over the period 1981–1995 [36] .

One bias that may appear when one tests a large number of strategies on the same sample is the data-snooping bias. As explained in [37] , data-snooping bias appears when a set of data is used more than once for the purpose of model selection. Strategies that generate positive returns on a specific sample may perform well only due to luck, without genuine predictive power. Therefore, when applied to a different sample, the results can be negative and investors may suffer important losses. A solution to this problem is the Bootstrap Reality Check developed by White [38] , which relies on resampling the return series in order to give a reliable verdict regarding the genuine performance of the strategy.

Materials and Methods

The database used in this paper is the tick-by-tick series of the EUR/USD currency pair over the year 2012 (ratedata.gaincapital.com). A time series with a frequency of 60 minutes was used for testing the performance of the genetic algorithm.

Time series have been separated in two sets: the training period and the testing period. The first one considers the first six months of 2012 and is used for finding the strategy that achieves the highest performance. The second set tests the performance of the strategy found in the first step.

The algorithm is applied 100 times on the training time series, in order to find the characteristics of the best 100 individuals. We then assess the performance of these individuals on the out-of-sample series.

The hourly data extracted from the tick-by-tick data also include the minimum and maximum tick for both bid and ask quotes. We needed this information to establish whether the take-profit or stop-loss level had been reached during a given period.

Further, we start the description of the algorithm with the definition of an individual.

The Individual Characteristics

In a genetic algorithm for setting up an FX trading system, each individual is represented by a set of technical analysis rules. Each rule can be considered a chromosome, while the parameters that define a rule are considered genes. Here we consider the individual as being defined by 6 chromosomes (rules) and 24 genes (parameters). Four rules set the conditions for opening a position, and the remaining two define the conditions for exiting the position. Each rule contains a Boolean gene that can activate or deactivate the rest of the rule's genes.

The rules (chromosomes) are described below.

Rules for Position Opening

Rule 1. Exponential Moving Average: EMA(n). Genes:

  • 1. Boolean_EMA – takes the values 0 or 1. Value 0 deactivates the rule, while value 1 activates it.
  • 2. Nr_periods_EMA, noted n , takes values between 5 and 90.

The rule generates trades as follows. If there is no current open position, then a long position is generated only if the close price is higher than EMA(n) and a short position is generated only if the close price is lower than EMA(n) . If a position is currently open, then this rule is ignored.
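
The rule can be expressed compactly; the following Python/pandas sketch follows the description above (the paper's implementation is in Java, so this is an illustration only, and the no-open-position condition is left to the caller):

    import pandas as pd

    def ema_rule(close: pd.Series, n: int) -> pd.Series:
        # +1 = open long (close above EMA(n)), -1 = open short (close below),
        # 0 = no signal; to be applied only when no position is open.
        ema = close.ewm(span=n, adjust=False).mean()
        return (close > ema).astype(int) - (close < ema).astype(int)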

Rule 2. Moving Average Convergence Divergence: MACD(p,q,m). Genes:

  • 3. Boolean_MACD – takes the values 0 or 1. Value 0 deactivates the rule, while value 1 activates it.
  • 4. Periods_short_MA, noted p takes values between 5 and 90
  • 5. Periods_long_MA, noted q - takes values between 10 and 100, with the restriction q > p
  • 6. Periods_signal_MACD, noted m - takes values between 5 and 25 and is the moving average of the difference between the short and the long moving average.
  • 7. Boolean_signal - takes the values 0 or 1. Value 0 sets the value of the signal to 0, which transforms the MACD into a simple moving-average crossover rule. Value 1 activates the signal. Value 0 occurs with a probability of 25%, while value 1 occurs with a probability of 75%.

The trades are generated by this rule as follows. Firstly, if a position is already open, the rule is ignored. If there is no currently open position and Boolean_signal has the null value, the rule takes into consideration only the short and the long moving averages: a long position is opened when the short moving average is higher than the long one, and a short position is opened otherwise. If Boolean_signal takes the value 1, the rule proceeds as follows: if the difference between the short moving average and the long one is higher than the value of the signal line, a long position is opened; otherwise, a short position is opened.
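
Rule 2 admits a similar sketch (assuming, as is conventional for MACD, that the moving averages are exponential; the paper does not state this explicitly):

    import pandas as pd

    def macd_rule(close: pd.Series, p: int, q: int, m: int,
                  use_signal: bool) -> pd.Series:
        # use_signal=False (gene Boolean_signal = 0) reduces the rule to a
        # moving-average crossover; use_signal=True compares the MACD line
        # with its m-period signal line.
        macd = (close.ewm(span=p, adjust=False).mean()
                - close.ewm(span=q, adjust=False).mean())
        signal = macd.ewm(span=m, adjust=False).mean() if use_signal else 0.0
        return (macd > signal).astype(int) - (macd <= signal).astype(int)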

Rule 3. Relative Strength Index: RSI (n). Genes:

  • 8. Boolean_RSI – takes the values 0 or 1. Value 0 deactivates the rule, while value 1 activates it.
  • 9. Periods_RSI, noted n , takes values between 5 and 50.
  • 10. Oversold_signal_RSI, noted p , takes values between 15 and 35.
  • 11. Overbought_signal_RSI, noted q , takes values between 65 and 85.
  • 12. Boolean_signal_RSI - takes the values 0 or 1. The use of this gene is described below.

The rule generates trades only if currently there is no open position. The trades are generated based on the Boolean_signal_RSI value as follows. When it takes the value 0, a long position is opened when the RSI value drops under p and a short position is opened when the RSI value rises over q . When it takes the value 1, a long position is opened when the RSI value rises over p and a short position is opened when the RSI value drops under q .
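
A sketch of Rule 3, with the RSI computed by Wilder-type exponential smoothing (one common construction; the paper does not pin down its smoothing choice):

    import pandas as pd

    def rsi(close: pd.Series, n: int) -> pd.Series:
        delta = close.diff()
        gain = delta.clip(lower=0).ewm(alpha=1 / n, adjust=False).mean()
        loss = (-delta).clip(lower=0).ewm(alpha=1 / n, adjust=False).mean()
        return 100 - 100 / (1 + gain / loss)   # 0-100 oscillator

    def rsi_rule(close, n, p, q, contrarian: bool) -> pd.Series:
        # contrarian=True is Boolean_signal_RSI = 0: buy when the RSI drops
        # under the oversold level p, sell when it rises over q;
        # contrarian=False inverts the signals, as the gene allows.
        r = rsi(close, n)
        if contrarian:
            return (r < p).astype(int) - (r > q).astype(int)
        return (r > p).astype(int) - (r < q).astype(int)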

Rule 4. Filter(n). Genes:

  • 13. Boolean_Filter – takes the values 0 or 1. Value 0 deactivates the rule, while value 1 activates it.
  • 14. Periods_Filter, noted n , takes values between 1 and 15.
  • 15. Increase_signal_Filter, noted p , takes values between 50 and 100 pips.
  • 16. Decrease_signal_Filter, noted q , takes values between 50 and 100 pips.
  • 17. Boolean_trader_type_Filter - takes the values 0 or 1.

This rule respects the same restriction as the other three opening rules: if there is already a currently open position, the rule is ignored. The trades are generated based on the Boolean_trader_type_Filter value as follows. Value 0 signals a trend follower (enters long if the price increases more than p pips, or short if the price decreases more than q pips). Value 1 signals that the trader will enter long if the price decreases more than q pips, or short if the price increases more than p pips.

The Boolean genes that activate or deactivate the rules (genes 1, 3, 8 and 13) are of great importance here. When all of them take the value 0, the individual will never open a position (because no opening rule is active). To avoid such situations, which have a probability of occurrence of 6.25%, we proceed as follows: if these genes all take the value 0, we randomly change the value of one of them.

Moreover, if two or more of these genes simultaneously take the value 1, a position is opened only if all the active rules give the same trading signal (to buy or to sell). Therefore, an individual with only one active rule is likely to trade more often than an individual with all rules active.
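
The unanimity requirement itself takes only a few lines (a sketch; `signals` is assumed to contain the +1/0/-1 series of the currently active opening rules only):

    import pandas as pd

    def combined_entry(signals: list) -> pd.Series:
        # Open a position only when every active rule agrees on direction;
        # 0 means stay out of the market.
        df = pd.concat(signals, axis=1)
        go_long = (df == 1).all(axis=1)
        go_short = (df == -1).all(axis=1)
        return go_long.astype(int) - go_short.astype(int)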

As important as the rules that define the conditions for opening a position are the rules used to exit that position, in order to take profits or to cut losses. These rules are described below.

Rules for Exiting the Position

Rule 5. Fixed exit levels (p,s). Genes:

  • 18. Boolean_fixed_exit – takes the values 0 or 1. Value 0 deactivates the rule, while value 1 activates it.
  • 19. TP_fixed, noted p , takes values between 15 and 150 pips
  • 20. SL_fixed, noted s , takes values between 10 and 100 pips

Opposite to the opening rules, the rules for exiting the position are active only when a position is open. The above rule exits a long position if the price rises at least p pips (take profit) or drops at least s pips (stop loss). Accordingly, the rule exits a short position if the price drops at least p pips (take profit) or rises at least s pips (stop loss).

Rule 6. Trailing exit levels (p,s,q). Genes:

  • 21. Boolean_trailing_exit - takes the values 0 or 1. Value 0 deactivates the rule, while value 1 activates it. This gene is conditioned by gene number 18. If gene 18 takes value 0, then gene 21 takes value 1 and if gene 18 takes value 1, then gene 21 takes value 0.
  • 22. TP_trailing, noted p , takes values between 15 and 150 pips
  • 23. SL_trailing, noted s , takes values between 10 and 100 pips
  • 24. Trailing_level, noted q , takes values between 10 and 100 pips, under the restriction that q < p .

The above rule can be active only if a position is already open and rule 5 is not active. If a long position is open and the price rises at least q pips, but less than p pips, the take-profit and stop-loss levels are updated by increasing them by q pips. If the price then rises another q pips without the new take-profit level being reached, the stop-loss and take-profit levels are increased by another q pips. The same procedure is followed until the stop loss is reached or the take profit is hit within one period. For a short position, the same methodology is used, with the difference that the stop-loss and take-profit levels are updated by decreasing them by q pips.
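
Read literally, the trailing logic for a long position could be sketched as follows (hypothetical pip-denominated inputs; the short side mirrors it with the levels shifted down instead of up):

    def trailing_exit_long(path, entry, p, s, q):
        # path: successive prices in pips; TP starts at entry + p and SL at
        # entry - s; each further advance of q pips that falls short of the
        # take-profit ratchets both levels up by q pips.
        tp, sl, next_step = entry + p, entry - s, entry + q
        for i, px in enumerate(path):
            if px >= tp:
                return i, "take-profit"
            if px <= sl:
                return i, "stop-loss"
            while px >= next_step:       # price advanced another q pips
                tp += q
                sl += q
                next_step += q
        return None, "still open"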

The Genetic Algorithm

After defining the individual, characterized by the rules for entering a position (based on the technical analysis indicators) and by the exit rules (take profit and stop loss), we proceed to the genetic algorithm, which consists of the following steps:

  • A population of 100 individuals is randomly generated.
  • We compute the profit or loss generated by each individual over the training period. Each individual is evaluated based on this measure.
  • The individuals are ranked based on the generated profit or loss.
  • Elitism. The most profitable individual is automatically passed to the new generation
  • Selection of parents. The probability of a given individual to become a parent for the new generation is based on its ranking. In order to increase the computational speed, we divided the individuals in 10 classes of fitness (profitability). First class contains the first 10 best-ranked individuals, the second class contains the individuals ranked 11th to 20th, while the 10th class contains the last 10 ranked individuals ( Table 1 ). For the individuals of the same class, we attach the same probability. In addition, the probability is higher for classes that contain better-ranked individuals (e.g. the first class will have attached a higher probability than the 10th class).
  • Crossover. Using the selection criteria described above, pairs of two parents are randomly chosen. Each pair of parents will create a new individual. In order to choose what genes from what parent will be passed to the new individual, a number n (where 1< n <24) is randomly generated. The new individual will receive the genes 1 to n from one parent and the genes n +1 to 24 from the other parent. The gene 21 will still depend on the gene 18. This way 80 individuals from the new generation are obtained.
  • Introduction of migrants. In order to increase the diversity and to avoid fast convergence, we randomly generate 19 individuals in the new generation.
  • The new generation becomes the current generation, and the evaluation, ranking, selection and crossover steps are repeated.
  • This procedure is repeated for 30 generations.
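
In outline, the loop reads as follows. This Python sketch uses `random_individual`, `fitness`, `crossover` and `class_weights` as placeholders for the paper's 24-gene encoding, its back-tester and the class-based selection probabilities of Table 1, none of which are reproduced here:

    import random

    POP, ELITE, CHILDREN, MIGRANTS, GENERATIONS = 100, 1, 80, 19, 30

    def evolve(random_individual, fitness, crossover, class_weights):
        pop = [random_individual() for _ in range(POP)]
        for _ in range(GENERATIONS):
            pop.sort(key=fitness, reverse=True)   # rank by training profit
            # one selection weight per class of 10 ranked individuals
            weights = [class_weights[i // 10] for i in range(POP)]
            new_pop = pop[:ELITE]                 # elitism
            for _ in range(CHILDREN):             # crossover: 80 children
                a, b = random.choices(pop, weights=weights, k=2)
                new_pop.append(crossover(a, b))
            # migrants keep diversity and slow down convergence
            new_pop += [random_individual() for _ in range(MIGRANTS)]
            pop = new_pop
        return max(pop, key=fitness)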

Table 1: https://doi.org/10.1371/journal.pone.0078177.t001

Executing the genetic algorithm yields one individual, the result of the evolution. We repeat the genetic algorithm 100 times in order to obtain 100 such individuals (sets of trading rules). These 100 best individuals from the training period are then evaluated on the testing period. The evaluation procedure consists of assessing the profit or loss (expressed in pips) generated by each individual in the testing period. The results obtained are attached in the Supporting Information file.

The GA was developed under the Eclipse Integrated Development Environment (IDE), version Helios Service Release 1, using Java Development Kit (JDK) version "1.7.0_21". Three Java Archive (JAR) libraries were added to the project: JFreeChart ( http://sourceforge.net/projects/jfreechart/files/1.%20JFreeChart/1.0.14/ ) and JCommon ( http://sourceforge.net/projects/jfreechart/files/3.%20JCommon/1.0.17/ ), both used to plot the cumulative profits of strategies, and CSV_JAR ( http://www.java2s.com/Code/JarDownload/opencsv/opencsv-2.3.jar.zip ), used to read the data from comma-separated values (CSV) files.

Results and Discussion

To analyze the results, we first discuss the evolution of EUR/USD in the training and testing periods ( Fig. 1 ). During the training period, the exchange rate is first characterized by a short upward movement, followed by a sideways evolution. Starting with May 2012, a strong downward trend sets in. The testing period starts with a continuation of the downward trend, followed by a reversal and an upward trend in August 2012. The final part of the testing period is characterized by a sideways evolution of the EUR/USD exchange rate. Both the training and testing periods contain trending and sideways price movements. Therefore, rules that perform relatively well in both types of market (trending and sideways) are expected to obtain good results in both periods.

Figure 1 represents the evolution of the EUR/USD pair over the year 2012. The first half shows the training period, while the second shows the testing period.

https://doi.org/10.1371/journal.pone.0078177.g001

The cumulative profit exhibits an upward trend over the training period for all the 100 best individuals ( Fig. 2 ). The increase in the cumulative profit does not show important variations, indicating that the individuals are well fitted to the training period. However, over the testing period, the cumulative profit appears distributed around the null value and its dispersion increases with time ( Fig. 3 ). The individuals that performed best on the training sample are not able to achieve similar results on the testing sample, providing evidence that the EUR/USD market is weak-form efficient.

Figure 2 is the outcome obtained in the first simulation by applying the genetic algorithm on the EUR/USD pair over the training period (first half of the year 2012).

https://doi.org/10.1371/journal.pone.0078177.g002

Figure 3 is the outcome obtained in the first simulation by applying the genetic algorithm on the EUR/USD pair over the testing period (second half of the year 2012).

https://doi.org/10.1371/journal.pone.0078177.g003

We ran two more simulations of the program in order to verify the consistency of our results; the parameters of the generated individuals are attached in the Materials S1 file, together with those of the first simulation. In the second simulation, the results for the training period are very similar to those obtained in the initial one ( Fig. 4 ). In addition, the cumulative profit over the testing period exhibits the same pattern as in the first simulation ( Fig. 5 ). The third simulation gives very similar results ( Fig. 6 , Fig. 7 ). Therefore, these simulations validate the initial finding that the best performers over the training period are not able to achieve similar results over the testing period. Our results are consistent with those obtained by Mendes et al. [34] , suggesting the weak-form efficiency of the EUR/USD market.

Figure 4 is the outcome obtained in the second simulation by applying the genetic algorithm on the EUR/USD pair over the training period (first half of the year 2012).

https://doi.org/10.1371/journal.pone.0078177.g004

Figure 5 is the outcome obtained in the second simulation by applying the genetic algorithm on the EUR/USD pair over the testing period (second half of the year 2012).

https://doi.org/10.1371/journal.pone.0078177.g005

Figure 6 is the outcome obtained in the third simulation by applying the genetic algorithm on the EUR/USD pair over the training period (first half of the year 2012).

https://doi.org/10.1371/journal.pone.0078177.g006

Figure 7 is the outcome obtained in the third simulation by applying the genetic algorithm on the EUR/USD pair over the testing period (second half of the year 2012).

https://doi.org/10.1371/journal.pone.0078177.g007

Next, we computed the statistics of all the 300 generated individuals for the training and testing periods. Statistics with and without transaction costs are computed. Results are similar in both cases.

Statistics on the training sample show that the minimum, maximum and average cumulative profits are all positive and high ( Table 2 ). This happens because each selected individual is the most profitable from a set of 3000 individuals. Therefore, their outcome is predictably high.

Table 2: https://doi.org/10.1371/journal.pone.0078177.t002

The second period is a robustness test for the strategies found in the first period. The average cumulative profit at the end of the testing period is negative but close to 0, consistent with the efficiency hypothesis that no arbitrage can be made using the winning strategies from period 1 ( Table 3 ). In addition, the variability of the outcomes is higher in the testing period (the standard deviation is almost double that of the training period). The values of the skewness and kurtosis statistics provide evidence that the profit distribution over the testing period may be normal. The empirical distribution plotted in Fig. 8 shows that the profits follow a distribution close to the normal one, but it is shifted from the standard normal distribution due to its negative average.
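
Such a normality claim can be screened with standard moment statistics. In the sketch below, the file name is hypothetical (the actual per-individual outcomes are in the Materials S1 file):

    import numpy as np
    from scipy import stats

    profits = np.loadtxt("testing_profits.csv")        # hypothetical export
    print("skewness:", stats.skew(profits))            # ~ 0 if normal
    print("excess kurtosis:", stats.kurtosis(profits)) # ~ 0 if normal
    print(stats.jarque_bera(profits))                  # joint moment test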

Figure 8 shows the distribution of the outcome obtained in all the three simulations on the testing period. The normal distribution with mean 0 and standard deviation equal to the one of the empirical distribution of the profits is also represented.

https://doi.org/10.1371/journal.pone.0078177.g008

Table 3: https://doi.org/10.1371/journal.pone.0078177.t003

A frequent problem in the case of technical trading rules is data-snooping bias. It may appear when many strategies are tested on the same sample: a rule may perform well in a period only due to luck, and when it is applied to another period it generates negative returns. In the literature, a data-snooping test is applied to check the validity of good performance. In our case, the out-of-sample results are distributed around 0, showing that in the case of the EUR/USD market one cannot find an outperforming strategy based on historical prices. Therefore, in the absence of a consistently profitable strategy (genuine or due to luck), a data-snooping test is not needed in our algorithm.

Concluding, our results show that the hypothesis of weak-form efficiency cannot be rejected in the case of the EUR/USD market. Of course, this does not necessarily mean that one cannot prove market inefficiency by finding a set of rules that consistently achieves profits. However, finding such a set of rules is a difficult task. Our main results suggest that an investor should analyze carefully before taking speculative positions based on technical indicators and computer-based algorithms, because the chances of losing in the long run are high. The fact that a sophisticated algorithm was not able to achieve sustainable profits supports this remark.

We recommend as future research adding some filters to the trading strategies in order to avoid false signals. For example, a strategy may achieve better results if the investor enters a position only after receiving the same signals for several periods. The same filter can be applied for the exit rules. Moreover, if some strategies are found to be performing, a data-snooping test should be applied in order to check their genuine predictive power.

Supporting Information

Materials S1.

This file contains the parameters of the individuals generated by the genetic algorithm. There are three sheets, each one containing the parameters (genes) of the individuals generated in each simulation. The first sheet contains the genes’ values of the 100 individuals generated by the first simulation. The second and the third sheet contain the genes’ values of the individuals generated by the additional two simulations.

https://doi.org/10.1371/journal.pone.0078177.s001

Acknowledgments

We would like to thank the two reviewers for their insightful comments and suggestions.

Author Contributions

Conceived and designed the experiments: IAB MCD. Performed the experiments: IAB MCD. Analyzed the data: IAB MCD. Contributed reagents/materials/analysis tools: IAB MCD. Wrote the paper: IAB MCD.

BUS614: International Finance

Market Efficiency

There are generally two theories to assist pricing: the Efficient Market Hypothesis (EMH) and Behavioural Finance Theory. Understanding the limitations of each theory is critical. Read the three concepts on this page to gain a comprehensive understanding of the EMH. What are the limitations of the EMH?

The Efficient Market Hypothesis

The EMH asserts that financial markets are informationally efficient with different implications in weak, semi-strong, and strong form.

Learning Objective

Differentiate between the different versions of the Efficient Market Hypothesis

  • In weak-form efficiency, future prices cannot be predicted by analyzing prices from the past.
  • In semi-strong-form efficiency, it is implied that share prices adjust to publicly available new information very rapidly and in an unbiased fashion, such that no excess returns can be earned by trading on that information.
  • In strong-form efficiency, share prices reflect all information, public and private, and no one can earn excess returns.

  • technical analysis: A stock or commodity market analysis technique which examines only market action, such as prices, trading volume, and open interest.
  • fundamental analysis: An analysis of a business with the goal of financial projections in terms of income statement, financial statements and health, management and competitive advantages, and competitors and markets.
  • insider trading: Buying or selling securities of a publicly held company by a person who has privileged access to information concerning the company's financial condition or plans.

The efficient-market hypothesis (EMH) asserts that financial markets are "informationally efficient". In consequence of this, one cannot consistently achieve returns in excess of average market returns on a risk-adjusted basis, given the information available at the time the investment is made.

There are three major versions of the hypothesis: weak, semi-strong, and strong.

  • The weak-form EMH claims that prices on traded assets (e.g., stocks, bonds, or property) already reflect all past publicly available information.
  • The semi-strong-form EMH claims both that prices reflect all publicly available information and that prices instantly change to reflect new public information.
  • The strong-form EMH additionally claims that prices instantly reflect even hidden or "insider" information.

11.5 Efficient Markets

Learning Outcomes

By the end of this section, you will be able to:

  • Understand what is meant by the term efficient markets .
  • Understand the term operational efficiency when referring to markets.
  • Understand the term informational efficiency when referring to markets.
  • Distinguish between strong, semi-strong, and weak levels of efficiency in markets.

Efficient Markets

For the public, the real concern when buying and selling stock through the stock market is the question, “How do I know if I’m getting the best available price for my transaction?” We might ask an even broader question: Do these markets provide the best prices and the quickest possible execution of a trade? In other words, we want to know whether markets are efficient. By efficient markets , we mean markets in which costs are minimal and prices are current and fair to all traders. To answer our questions, we will look at two forms of efficiency: operational efficiency and informational efficiency.

Operational Efficiency

Operational efficiency concerns the speed and accuracy of processing a buy or sell order at the best available price. Through the years, the competitive nature of the market has promoted operational efficiency.

In the past, the NYSE (New York Stock Exchange) used a designated-order turnaround computer system known as SuperDOT to manage orders. SuperDOT was designed to match buyers and sellers and execute trades with confirmation to both parties in a matter of seconds, giving both buyers and sellers the best available prices. SuperDOT was replaced by a system known as the Super Display Book (SDBK) in 2009 and subsequently replaced by the Universal Trading Platform in 2012.

NASDAQ used a process referred to as the small-order execution system (SOES) to process orders. The practice for registered dealers had been for SOES to publicly display all limit orders (orders awaiting execution at specified price), the best dealer quotes, and the best customer limit order sizes. The SOES system has now been largely phased out with the emergence of all-electronic trading that increased transaction speed at ever higher trading volumes.

Public access to the best available prices promotes operational efficiency. This speed in matching buyers and sellers at the best available price is strong evidence that the stock markets are operationally efficient.

Informational Efficiency

A second measure of efficiency is informational efficiency, or how quickly a source reflects comprehensive information in the available trading prices. A price is efficient if the market has used all available information to set it, which implies that stocks always trade at their fair value (see Figure 11.12 ). If an investor does not receive the most current information, the prices are “stale”; therefore, they are at a trading disadvantage.

Forms of Market Efficiency

Financial economists have devised three forms of market efficiency from an information perspective: weak form, semi-strong form, and strong form. These three forms constitute the efficient market hypothesis. Believers in these three forms of efficient markets maintain, in varying degrees, that it is pointless to search for undervalued stocks, sell stocks at inflated prices, or predict market trends.

In weak form efficient markets, current prices reflect the stock’s price history and trading volume. It is useless to chart historical stock prices to predict future stock prices such that you can identify mispriced stocks and routinely outperform the market. In other words, technical analysis cannot beat the market. The market itself is the best technical analyst out there.
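
A first-pass check of the weak form is simply to measure the serial correlation of returns. The Python sketch below uses a synthetic random walk as a stand-in for real price data:

    import numpy as np

    rng = np.random.default_rng(4)
    # geometric random walk: a stand-in for a weak-form-efficient price series
    prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, size=1000)))
    returns = np.diff(np.log(prices))
    r = returns - returns.mean()
    print(np.dot(r[:-1], r[1:]) / np.dot(r, r))  # lag-1 autocorrelation ~ 0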

Want to cite, share, or modify this book? This book uses the Creative Commons Attribution License and you must attribute OpenStax.

Access for free at https://openstax.org/books/principles-finance/pages/1-why-it-matters
  • Authors: Julie Dahlquist, Rainford Knight
  • Publisher/website: OpenStax
  • Book title: Principles of Finance
  • Publication date: Mar 24, 2022
  • Location: Houston, Texas
  • Book URL: https://openstax.org/books/principles-finance/pages/1-why-it-matters
  • Section URL: https://openstax.org/books/principles-finance/pages/11-5-efficient-markets

© Jan 8, 2024 OpenStax. Textbook content produced by OpenStax is licensed under a Creative Commons Attribution License . The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University.

Finance Contrôle Stratégie

Efficient market hypothesis: an experimental study with uncertainty and asymmetric information

The efficient market hypothesis has been the subject of wide debate over the past decades. This paper investigates market efficiency by using laboratory experiments. We ran three experimental treatments with two distinguishing dimensions: uncertainty and asymmetric information. Results show that both uncertainty and information asymmetry affect the level of market efficiency, with information asymmetry having the more pronounced impact. Market efficiency is reduced when the fundamental value of stocks is volatile. In addition, we find that participants under-react to information and that this under-reaction is not corrected during trading periods, so prices remain stable.

The authors gratefully acknowledge the comments of the two anonymous reviewers and the co-editor, Jean-François Gajewski.

1. Introduction

  • 1 Other definitions of market efficiency have been formulated. For example, Jensen (1978) argues that (...)

1 In efficient markets, competition between investors is so aggressive that prices instantly adjust to new information. At all times, a financial instrument trades at a price determined by its return and its risk. Fama (1965) stresses that in an efficient market the actual trading price of a stock will be a good estimate of its fundamental value. The concept of efficiency is based on the arguments put forward by Samuelson (1965), who states that the price of a financial asset fluctuates in a random way: future information is unpredictable, and the changing price of each financial asset follows a random pattern. According to Fama (1970), a financial market is efficient if prices always fully reflect available information. If an asset’s price fully expresses all events occurring up to date t , then only new information can change it 1 . Three forms of efficiency have been defined by distinguishing between different categories of information (Fama, 1970, 1991). The semi-strong informational efficiency hypothesis suggests that current prices fully incorporate all publicly available information. This information comprises not just the history of past prices, but also companies’ financial publications and studies conducted by financial analysts. This hypothesis can be tested empirically by studying the reaction of prices to announcements made by companies: the price of a stock should react immediately and appropriately to any relevant disclosure of information. Empirical studies use the event study methodology (see in particular Brown and Warner, 1985). However, they do not lead to homogeneous conclusions regarding semi-strong efficiency (see for example Bernard and Thomas, 1989; Ikenberry and Ramnath, 2002; Spyrou et al., 2007; Truong, 2011; Hsu et al., 2013; Huynh and Smith, 2017).

2 Given the mixed results in the previous literature, we investigate the semi-strong form of market efficiency by means of laboratory experiments. The purpose here is to test whether uncertainty and information asymmetry impact market efficiency. To alleviate the difficulties linked to event studies (particularly the wide variability of results depending on the method used to calculate expected and/or abnormal returns, the choice and length of the calculation period, etc.), the experimental method is useful because it can control the information held by the subjects and observe their behavior. The protocol used in this paper mirrors actual trading platforms and includes three experimental treatments which differ according to the information level held by the subjects. In the first two treatments (noted T1 and T2), the information is symmetrical between the subjects. The difference between T1 and T2 is the level of uncertainty: T2 is characterized by uncertainty about future dividends, whereas in T1 dividends are known by the subjects. In the third treatment (noted T3), there is information asymmetry between the subjects. In all three treatments, trading in stocks is fluid since the stocks are liquid and trade without transaction costs. Experiment participants are continuously informed of the fundamental value of the traded stock: this is a crucial piece of information which allows us to compare the prices established on the market with the fundamental value without making any auxiliary hypotheses. We find that trading is not efficient and that both uncertainty and information asymmetry affect the level of market efficiency. Stock prices show greater deviation from fundamental value when information is asymmetric, which is consistent with the existing literature. Finally, participants under-react to information announcements. This under-reaction, which is more pronounced in markets with information asymmetry between subjects, is not corrected, and prices remain stable until the end of the trading periods. Furthermore, the results show that the difference between price and fundamental value is particularly large when the latter varies significantly. Thus, the more volatile the fundamental value, the more informational efficiency is reduced.

3 This paper contributes to the literature. While most experimental studies focus on price formation in the presence of symmetric and asymmetric information (see for example Kirchler, 2009; Kelly and Ljungqvist, 2012), it is the first time, to the best of our knowledge, that the semi-strong efficiency hypothesis under uncertainty is tested through an experimental study. We investigate differences in information dissemination between markets with uncertainty and markets with an asymmetric information structure. This implementation of uncertainty and asymmetric information is realistic: on real financial markets, fundamental information is distributed along a continuum across investors. In addition, this paper provides useful insights for financial market participants such as individual or institutional investors. Real financial markets are characterized by uncertainty and information asymmetry between investors. The gap between stock prices and their fundamental values is especially wide when information asymmetry is high. This gap can be exploited by investors to build arbitrage strategies that enable securities prices to adjust more quickly to the fundamental value. Managers are also encouraged to reduce information asymmetry, which in turn implies better informational efficiency.

4 The remainder of this paper is structured as follows. Section 2 presents the literature review. Section 3 outlines the experimental design. Section 4 reports the results, and Section 5 concludes.

2. Literature review

5 The semi-strong informational efficiency hypothesis has traditionally been tested using the event study methodology. This involves measuring the abnormal return as the difference between the observed return of a stock and its theoretical return (also referred to as the expected or normal return). According to the semi-strong efficiency hypothesis, prices adjust rapidly to any new information, and abnormal or excess returns should be observed as soon as the information is made public (with no time lag). Most studies covering short event windows (a few days around the announcement date) illustrate instant price adjustment to public information (Anderson et al., 2001; Dasilas and Leventis, 2011). A large part of the literature has used this methodology to investigate whether dividends contain information. For example, Aharony and Swary (1980) find a small but significant dividend announcement effect (an average excess return of about 1% over the two-day announcement period). Asquith and Mullins (1983) find large and significant positive excess returns on the announcement day of dividend initiations but no reaction thereafter. By studying the abnormal returns following dividend initiations and omissions, Michaely et al. (1995) show that short-run reactions to omissions are greater than those to initiations (-7% versus +3.4% over a three-day window). In the French market, investors react favorably to dividend increase announcements: share prices increase by an average of 2.95% by the close of the fifth day after the announcement (Bouattour, 2007).

  • 2 Studies in financial behavior show that investors are subject to cognitive biases that influence th (...)

6 Other studies using a longer event window show that the reaction to new information continues during the months following the announcement (Bernard and Thomas, 1989; Ikenberry and Ramnath, 2002; Truong, 2011). Ball and Brown (1968) were the first to highlight abnormal returns throughout the 60 days following earnings announcements, demonstrating the persistence of the announcement effect. Bernard and Thomas (1989) rank shares according to the degree of surprise in earnings announcements. They show that, over the six months that follow the announcement dates, a portfolio with good news records an average return higher than that of a portfolio with bad news. These results illustrate that information is underestimated at the time of the announcement, then gradually integrated into prices. According to Bernard and Thomas (1989), the tendency in stock prices following the event is consistent with an under-reaction to information, which is gradually corrected after the event. Truong (2011) analyzes abnormal returns over event windows running from two days to as many as 60, 120 or 250 days after the earnings announcements. The results show the existence of a trend following announcements. Over a one-year period, the strategy of holding stocks that have announced improved earnings and selling stocks that showed a decline in earnings generates a gain in the order of 9%. The calculation of abnormal returns over event windows of two and three years shows that cumulative abnormal returns remain constant. This result shows that the duration of the post-earnings-announcement drift is 12 months. Michaely et al. (1995) also highlight a post-announcement drift: in the 12 months after the dividend announcement, there are positive excess returns for firms initiating dividends (7.5%) and negative excess returns for firms omitting dividends (-11%). Abnormal returns have been demonstrated for other events such as initial public offerings (Ritter 1991; Loughran and Ritter 1995; Zheng, 2007), share repurchases (Ikenberry et al., 1995; Cheng et al., 2015) and stock splits (Ikenberry and Ramnath, 2002; Boehme and Danielsen, 2007). While Ball and Brown (1968), Bernard and Thomas (1989), Ikenberry and Ramnath (2002), Truong (2011) and Huynh and Smith (2017) show the existence of an under-reaction to information, other authors (for example Mai, 1995; Clements et al., 2009; Alwathainani, 2012; Tai, 2014; Kudryavtsev, 2018) highlight an excessive price reaction to information. This initial over-reaction is followed by a correction process which reflects the adjustment of stock prices to the stock’s fundamental value. 2

  • 3 In his article published in 1998, Fama suggests, “ Market efficiency must be tested jointly with a m (...)

7 However, according to Fama (1991, 1998), the joint hypothesis problem 3 can skew results: efficiency is not directly testable, since a price-formation model must be used. In some cases, studies have documented inefficiencies without resorting to a price-formation model. This is the case for twin shares, corporate spinoffs and dual share classes (for a review of these anomalies, see Rosenthal and Young, 1990; Lamont and Thaler, 2003; Maymin, 2011). These inefficiencies are due mainly to the limits to arbitrage (Shleifer and Vishny, 1997), which are explained by fundamental risk, noise trading risk, transaction costs and short selling constraints. However, such analyses do not make it possible to study market efficiency as a whole, but only particular cases.

  • 4 For a summary of the experimental method, see in particular Plott (1991), Davis and Holt (1993), Ca (...)

8 The experimental method can overcome these difficulties 4 . Laboratory (or lab) experiments involve creating a controlled “sterile” environment and isolating the effect of certain variables on the phenomenon. In contrast, in a natural experiment, the researcher “ simply observes naturally occurring, controlled comparisons of one or more treatments with a baseline ” (Harrison and List, 2004, p. 1041). A lab experiment employs a standard or non-standard subject pool (a convenience sample), while in a natural experiment the subjects undertake tasks naturally and are not informed of their participation. Experiments in the laboratory are also more controlled than in nature: the experimenter can control the number of participants, the conditions of the environment (for example, uncertainty and asymmetric information in the market structure), and the information held by participants. Moreover, they make it possible to measure variables that are difficult to quantify (for example, the fundamental value of the stock in Kirchler, 2009). According to Levitt and List (2007), a critical assumption underlying the interpretation of data from lab experiments is the generalization of results, i.e. their extrapolation beyond the laboratory. However, even if this question is crucial because humans are the subjects, environments constructed in the lab conform to the real world, and “those aspects of economic behavior under study are perfectly general ” (Harrison and List, 2004, p. 1009). Finally, Harrison and List (2004, p. 1009) note that lab experiments permit “ sharper and more convincing inference”.

9 With these main advantages, lab experiments are used to test the informational efficiency hypothesis. For example, Theissen (2000) studies informational efficiency in three different market structures: an auction call market, a continuous market, and a dealer market. He shows that prices reflect the available information on continuous and auction call markets. This finding is consistent with Friedman (1993), who finds that informational efficiency in dealer markets is weaker than in continuous markets. Docherty and Easton (2012) test the market efficiency hypothesis and observe that prices underreact to good and bad news and display significant short-term momentum. Other experiments have been developed by taking into consideration a constant or a variable fundamental value. Smith et al. (1988) employ a model in which the fundamental value is decreasing over time. They observe the development of a bubble, characterized by a phase of growth in prices followed by a crash at the end of the experiment. Their study shows that speculation prevents prices from revealing the fundamental value of the traded asset. Lei et al. (2001) modify the standard framework of Smith et al. (1988) by forbidding speculation. They demonstrate that speculative bubbles are due to the presence of irrational behavior, in line with the “active participation hypothesis”: experimenters often encourage subjects to actively participate, and this represents a potential source of errors. In order to reduce speculative bubbles, Smith et al. (2000) show the importance of a single dividend payment. Lei and Vesely (2009) consider that subjects’ market experience is not necessary to eliminate bubbles in the type of market studied by Smith et al. (1988), and introduce a pre-market phase designed to draw the subjects’ attention to the structure of the dividends they can collect: this suffices to ensure the efficiency of the market.

10 In the case of a constant fundamental value, Noussair et al. (2001) observe that bubbles diminish in markets. Hommes et al. (2005) present a different model, again with a constant fundamental value, but in which the subjects submit risky-asset price forecasts for the subsequent period. They find that prices fluctuate slightly around, or slowly converge towards, fundamental value. For Kirchler (2009), the study of investor behavior under growing, and above all fluctuating fundamental value, is essential since it reflects the reality of the financial markets. He focuses on price movements in response to fluctuating fundamental values following a stochastic process, and observes an under-reaction by participants to changes in the fundamental value. This under-reaction is more pronounced when there is information asymmetry between subjects.

11 Surprisingly, while uncertainty appears to be an important factor in explaining investors’ behavior, to our knowledge no experimental study has used uncertainty to test the semi-strong efficiency hypothesis. In his survey aiming to better understand investor psychology, Hirshleifer (2001) indicates that high uncertainty increases the tendency toward psychological biases that affect investors’ behavior. Uncertainty increases the overconfidence of investors (Daniel et al., 1998, 2001; Zhang, 2006). According to Daniel et al. (1998, 2001), return predictability should be stronger in firms with greater uncertainty because investors tend to be more overconfident when firms’ businesses are hard to value. This argument implies that greater uncertainty is related to relatively higher (lower) stock returns following good (bad) news. Finally, Jiang et al. (2005), Zhang (2006) and Francis et al. (2007) document that the higher the information uncertainty, the greater the under-reaction.

12 The present paper tests the hypothesis of semi-strong efficiency in financial markets by using laboratory experiments in which the variable fundamental value is clearly known by the subjects. The reaction to this information is studied directly by observing whether and how stock prices adjust to the fundamental value. While most experimental studies focus on price formation in the presence of symmetric and asymmetric information, we enrich the literature by taking into consideration not only the asymmetry of information between the subjects but also the level of uncertainty about future dividends.

3. Experimental design

3.1. Treatments

13 We consider three treatments that differ according to the nature of the information provided to the subjects: this enables us to study the semi-strong informational efficiency hypothesis in three different controlled environments. In the first two treatments (T1 and T2), all subjects receive the same information but the level of uncertainty about future dividends differs between T1 and T2. The third treatment (T3) is characterized by information asymmetry between the subjects. Each treatment includes six experimental sessions, each one composed of 24 periods of 100 seconds. This periodicity is used by Kirchler and Huber (2009), Kirchler (2009) and Hanke et al. (2010). At the beginning of each session in all three treatments, each subject is endowed with 50 shares and 1000 experimental units (EU) in cash. The subjects were briefed using written instructions at the beginning of each session. Four trial periods followed to allow them to familiarize themselves with the trading screen.

  • 5 In experiments conducted at the University of Innsbruck (Austria), Kirchler and Huber (2007) select (...)

14 Interaction between the subjects is based on the dividends disclosed at the beginning of each period. The wealth of each subject is a function of the subject’s stock and cash holdings. Wealth is also a function of the market price, and evolves with each transaction; it changes even if the subject took no part in the last transaction. At the beginning of each period, the dividend is updated. When a subject sells some of their shares, their cash holdings increase in real time and earn interest at the end of each period at a risk-free rate of 3%. The risk-adjusted interest rate is set at 10% and remains constant until the end of the experiment 5 . It is used to calculate the present value of the shares by discounting the future dividends known to the subjects. We inform the subjects of the risk-adjusted interest rate ( r e ). The starting wealth is the same for all participants, and all shares are assumed to have been bought at the same price at the beginning of the experiment. This price is equal to the present value for the first period of the game. This value reflects the quality of an efficient market, and our starting point therefore corresponds to an efficient market: if we assume that all subjects buy their shares at time t = 0 seconds and that the market is efficient at this time, all participants will have bought the share at the market price, which is assumed to be equal to its present value.

3.1.1. Treatment T1: disclosure of four dividends

15 The subjects trade shares on the basis of changes in future dividends. At the beginning of each period, each subject knows the dividend for the current period and coming dividends for the next three periods (Kirchler and Huber, 2009). Subjects are therefore presumed to be well-informed and know the precise values of future dividends (Kirchler and Huber, 2007). For each experimental session, the number of subjects varies between 10 and 14.

16 For the six experimental sessions of treatment T1, six series of dividends are generated. They have a bullish then bearish nature (or vice versa) in order to place the subjects in gain and loss situations. Other series are presented to the subjects in graphic form before the beginning of each experiment, in order to show them the randomness of dividends. The dividend process is a “random walk”, determined as follows:

$$ D_t = D_{t-1} + \varepsilon_t $$

17 D t is the dividend for period t ; ε t is a normally-distributed random variable with an expected value of zero and a variance of 0.16. The dividend for the first period is equal to 2 EU per share held.
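As an illustration (not the authors' code), the dividend process above can be simulated in a few lines of R; the function name simulate_dividends and the 24-period default are our own choices:

```r
# Sketch of the dividend random walk: D_t = D_{t-1} + eps_t,
# eps_t ~ N(0, variance 0.16), D_1 = 2 EU per share.
simulate_dividends <- function(n_periods = 24, d1 = 2, variance = 0.16) {
  eps <- rnorm(n_periods - 1, mean = 0, sd = sqrt(variance))  # sd = sqrt(variance)
  c(d1, d1 + cumsum(eps))                                     # cumulative shocks
}

set.seed(42)
dividends <- simulate_dividends()
```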

18 The present value ( PV ) of the stock is calculated by applying the dividend discount model and assuming that the last dividend is perpetual and constant (see for example Huber et al. , 2008; Kirchler and Huber, 2009). To calculate the present value for period t , we discount the dividends for period t and those for periods t+1 , t+2 and t+3 at the rate of 10%.

$$ PV_t = \frac{D_t}{1+r_e} + \frac{D_{t+1}}{(1+r_e)^2} + \frac{D_{t+2}}{(1+r_e)^3} + \frac{D_{t+3}}{r_e\,(1+r_e)^3} $$

19 PV t and D t represent respectively the present value and the dividend for the current period t. r e is the risk-adjusted interest rate equal to 10%.
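A minimal sketch of this dividend discount model in R, under the convention (our reading, consistent with the T2 formula PV = D/r_e below) that each known dividend is discounted by one period and the last known dividend is treated as a perpetuity; present_value is a hypothetical helper name, not the authors' code:

```r
# Present value of the dividends known to a subject, with the last
# known dividend assumed perpetual and constant (assumed convention).
present_value <- function(known_dividends, r_e = 0.10) {
  j <- length(known_dividends)
  first_divs <- known_dividends[seq_len(j - 1)]            # D_t .. D_{t+j-2}
  pv <- sum(first_divs / (1 + r_e)^seq_along(first_divs))  # discounted dividends
  pv + known_dividends[j] / (r_e * (1 + r_e)^(j - 1))      # perpetuity term
}

present_value(c(2.0, 2.1, 1.9, 2.2))  # T1: the four dividends known in period t
```

With a single known dividend the same helper reduces to D t / r e , which matches the present value used in treatment T2 below.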

3.1.2. Treatment T2: disclosure of a single dividend

20 This second treatment is characterized by uncertainty about future dividends. The only difference between T1 and T2 is the number of dividends disclosed to the subjects: in T1, the subjects are informed of the dividends of the current period and of the next three periods, whereas in T2 they are informed only of the dividend of the current period and the present value of the share. The subjects in T1 therefore have more visibility on future dividends and negotiate the share on the basis of four known dividends, while the subjects in T2 are in a situation of uncertainty about future dividends and trade on the basis of the current dividend only.

21 As before, 10 to 14 subjects participate in each T2 experimental session. To enable comparison with the first treatment, we used the same present value series as calculated in T1. We therefore calculated the series of dividends D t by multiplying the series PV t by 0.1. However, in this case the dividend for the first period is no longer 2 EU. We inform the subjects that the dividend for the first period is around 2 EU and will change randomly. The present value is calculated as follows ( r e is the risk-adjusted interest rate equal to 10%):

$$ PV_t = \frac{D_t}{r_e} $$

3.1.3. Treatment T3: information asymmetry between the participants

  • 6 For this third treatment, we are more restrictive on the number of subjects in each session. In ens (...)

22 This third treatment is characterized by information asymmetry between the subjects. At the beginning of each experimental session, participants are randomly allocated to one of the four information levels (I1 to I4) and this allocation remains unchanged throughout the entire session. 12 subjects participate in each experimental session, three for each information level 6 . The best-informed subjects, belonging to information level I4, know the dividend for the current period and the next three periods. Subjects in information level I3 know the dividend for the current period and the next two periods, etc. The least-informed subjects (information level I1) know only the dividend for the current period (see Figure 1).

Figure 1. Overview of traders’ knowledge about future dividends in T3


23 The conditional present value ( PV ) is a function of the dividends known by the subjects. It is calculated using the dividend discount model, in which the last dividend is presumed to be perpetual and constant:

$$ PV_{j,t} = \sum_{k=0}^{j-2} \frac{D_{t+k}}{(1+r_e)^{k+1}} + \frac{D_{t+j-1}}{r_e\,(1+r_e)^{j-1}} $$

  • 7 We assume that the risk-adjusted interest rate remains unchanged for the 3 treatments and for the 4 (...)

24 PV j , t and D t represent respectively the conditional present value of the information level j (from 1 to 4) and the dividend for the current period t. r e is the risk-adjusted interest rate 7 of 10%. To calculate the conditional present value for period t , we discount the dividends known to each subject at the rate of 10%.
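Reusing the present_value sketch from Section 3.1.1 (with hypothetical dividend values), the conditional present values for the four information levels could be computed as:

```r
# Conditional present value by information level: level j knows the
# dividends for the current period and the next j - 1 periods.
window <- c(2.0, 2.1, 1.9, 2.2)  # hypothetical D_t .. D_{t+3}
for (j in 1:4) {
  cat(sprintf("I%d: PV = %.2f\n", j, present_value(window[1:j])))
}
```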

25 For this third treatment, the value of the shares for all subjects at the beginning of the session is equal to the conditional present value for the best-informed subjects (information level I4). We explain to the subjects that they all bought the share at the same price, without however informing them that this price represents the conditional present value for the best informed subjects. The starting wealth is thus the same for all subjects, regardless of their information level.

3.2. Market architecture

26 The market in which the subjects operate can be considered a simplified but representative version of most real stock exchanges. It is an electronic market in which buying and selling offers confront each other directly. The trading mechanism is implemented as a continuous double auction with an open order book. The subjects can place their orders at any time during a given period. Each participant can trade shares with other participants by submitting limit orders (specifying price and quantity) or by accepting orders at the market price (specifying only the quantity to trade). Like Brandouy and Barneto (1999) and Kirchler (2009), we do not include a pre-opening market phase. The order book is empty at the start of each trading period. All bids and asks are recorded in the order book. A bid (or ask) is only valid if the proposed price is higher (or lower) than the best offer on the market. The offers are then publicly communicated to all the participants. The best offer can be accepted by another participant at any time. Partial execution of limit orders is possible; in such cases, an exchange is concluded at the price offered for the desired quantity. Trading takes place without transaction costs. In order to guarantee liquidity, the prices offered are restricted to a maximum of one digit after the decimal point. Finally, stock and cash holdings are carried over from one period to the next. The participants receive information in real time about dividends, the (conditional) present value, their wealth, their stock and cash holdings, and a chronological list of transaction prices.

3.3. Experimental implementation

27 Most experimental studies are conducted with students as subjects (Kirchler, 2009; Hanke et al., 2010) because they have plenty of time and are motivated by relatively modest monetary gains, leading to a low cost for the experimenter. Porter and Smith (2003) have shown that the behavior of students in experimental stock markets is very similar to that of real-life professional investors. We conducted our 18 experimental sessions (3 treatments x 6 sessions) in a computer laboratory with a total of 213 students. All of the students were taking courses in finance. Each student participated in only one experimental session and, when asked, confirmed that they had understood the instructions. Each session lasted about 90 minutes. While each experimental session is composed of 24 periods, the instructions given to subjects state that the game involves 20 to 30 periods and that the end of the game is random, with equal probability for each period. The aim is to prevent strategic behavior by participants at the end of the experiment (Kirchler and Huber, 2009; Hanke et al., 2010). The experiments were programmed and performed using z-Tree software (Fischbacher, 2007).

28 In order to engage the students and motivate them to make good decisions, a voucher-based tournament incentive structure is used. The value of the voucher awarded is between 0 and 30 euros, depending on each subject’s trading performance compared to the other subjects. In T3, incentives for trading are benchmarked by the performance of the other participants in the same information level. The vouchers are awarded at the end of each experimental session. Each subject’s gains at the end of the session are calculated in Experimental Units and are equal to the sum of the gains achieved over all 24 periods of the session. For a given period, the gain is equal to the variation in the wealth as measured at the end of the period.

29 To sum up, Table 1 summarizes the experimental differences between the three treatments.

Table 1. Differences between the three treatments

4. Results

4.1. Descriptive statistics

30 In the two treatments T1 and T2, the fundamental value ( FV ) is equal to the present value of the stock. In treatment T3, the FV is equal to the conditional present value for the best-informed subjects, i.e. those who know the dividends for the current period and the next three periods. This makes it possible to compare the prices established on the market with the fundamental value of the share. For each experimental session, we calculated two measures of efficiency proposed by Theissen (2000) for the opening price ( O ), the average transaction price weighted by the transaction volumes for each period ( P ) and the closing price ( C ). The first measurement, MAE , represents the mean absolute error between the transaction price j and the fundamental value.

$$ MAE_j = \frac{1}{24} \sum_{t=1}^{24} \left| P_{t,j} - FV_t \right| $$

31 where P t,j is the transaction price; FV t is the fundamental value; t is the trading period ranging from 1 to 24 for each experimental session. j indicates whether the transaction price considered is the opening price ( O ), the volume weighted average transaction price (P) or the closing price ( C ).

32 The second measurement, MRE , is the mean relative error. The absolute deviation between the price and the fundamental value is divided by the share value, and then an average for all the periods of each session is calculated. This measurement makes comparisons possible between experiments with different fundamental values.

$$ MRE_j = \frac{1}{24} \sum_{t=1}^{24} \frac{\left| P_{t,j} - FV_t \right|}{FV_t} $$

  • 8 Similarly, Hanke et al. (2010) and Kirchler et al. (2011) test market efficiency by using the absol (...)

33 If the market is efficient throughout all trading periods of each experimental session, then the opening prices, average transaction prices and closing prices should adjust to the fundamental value. These two measurements are built on an absolute value since we test the semi-strong informational efficiency hypothesis 8 . They should therefore tend towards 0.
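As a sketch (not the authors' code), both measures are one-liners in R, given per-period transaction price and fundamental value vectors for one session:

```r
# Mean absolute error and mean relative error over the 24 periods.
mae <- function(prices, fv) mean(abs(prices - fv))
mre <- function(prices, fv) mean(abs(prices - fv) / fv)
```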

34 Table 2 presents MAE and MRE for each experimental session and their average for each of the three treatments.

Table 2. Measurements of informational efficiency - descriptive statistics


35 The mean of the MAE for average transaction prices is 1.70, 1.64, and 3.68 EU respectively for the three treatments. The measurement MRE, which expresses the price deviation as a percentage of fundamental value, is better suited to comparison between experimental sessions when the movement of fundamental values differs. For deviation calculated on the basis of average transaction prices, this measurement comes to 9.56%, 8.72% and 19.38% respectively for the three treatments. This shows that informational efficiency is reduced to a greater extent where there is information asymmetry.

  • 9 In the study by Theissen (2000), an improvement in prices between opening and closing is found in t (...)

36 For treatment T1, in five sessions out of six, closing-price-based measurements are lower than opening-price-based measurements, reflecting the convergence of prices towards the fundamental value 9 . The only exception is session T1-M4: its opening price MRE measurement is 12.39% and increases to 14.67% by the end of the trading periods, showing that the opening prices are closer to the fundamental values than the closing prices. The improvement in informational efficiency during trading periods seems weak for the first treatment. Observing the mean MRE measurement over the six experimental sessions, we note that it fell by 1.02% (9.83% – 8.81%) between the beginning and end of the trading periods. For the sixth T1 session, the MRE measurement was 20.16% and 15.90% respectively for the opening and closing prices, an improvement of 4.26% in informational efficiency. The MAE measurement for average transaction prices shows that the absolute deviation between price and fundamental value is 1.70 EU on average. This measurement takes its highest value in the M6 market (2.67 EU). Over the six experimental sessions of the first treatment, the improvement in informational efficiency during the trading periods was an average 0.20 EU (|1.60 – 1.80|).

37 For treatment T2, informational efficiency appears to deteriorate and the closing prices deviate to a greater extent from the fundamental value than in T1. The mean MRE measurements for the six sessions of this second treatment show that informational efficiency declined by 1.39% (9.46% – 8.07%) between the beginning and end of the trading periods. The MAE measurement for average transaction prices (P) shows that the absolute deviation is equal to 1.64 EU, and the decline in informational efficiency between the beginning and the end of the periods is an average 0.25 EU (1.79 – 1.54). However, the MRE measurement shows that in the two sessions M1 and M6, the closing prices, compared to the opening prices, adjusted slightly to the fundamental value. In the four other sessions, the opening prices reflect increased informational efficiency. This result suggests that, depending on the information received at the beginning of each period, subjects react strongly (impulsive behavior) by selling or buying stocks, but then revise their beliefs and subsequently their initial trading strategy, which increases the deviation between price and fundamental value at the period-end. In this second treatment, the subjects only receive information related to the current period. Following an increase in the dividend for the current period, they buy stocks at prices close to the fundamental value, but the uncertainty related to future periods encourages them to revise the trading prices offered. Likewise, when the dividend for the current period falls, they sell some of the shares in their possession in order to collect interest on their cash holdings, but adjust the trading prices at the end of the period.

38 For treatment T3, the MAE measurement for average transaction prices shows that the absolute deviation between price and fundamental value is an average 3.68 EU. This measurement reaches its highest value for the M6 market (4.99 EU). Over the six T3 experimental sessions, the improvement in informational efficiency during the trading periods is an average 0.23 EU (|3.61 – 3.84|), which is consistent with the improvement in informational efficiency under T1 (0.20 EU). Observing the mean MRE measurement for the six experimental sessions, we note an improvement of 0.75% between the beginning and end of the trading periods (19.89% – 19.14%). This measurement is 19.38% for average transaction prices (P), high compared to T1 (9.56%) and T2 (8.72%). Thus, in the presence of information asymmetry, the deviation between price and fundamental value is relatively high, and the heterogeneity of information between subjects leads to weaker informational efficiency.

39 In the three treatments, we note that for average transaction prices, the highest MRE value corresponds to the M6 market. It is 19.40% for T1 (for which the average MRE is 9.56%), 13.35% for T2 (average MRE 8.72%) and 35.37% for T3 (average MRE 19.38%). The M6 market thus reflects weak informational efficiency compared to the other markets. Conversely, the lowest MRE measurement relates to the M3 market, in all 3 treatments. It is 4.32% in T1, 5.53% in T2 and 11.67% in T3. These three experimental sessions thus reflect the best informational efficiency. Despite the separate nature of the experimental sessions (the information on dividends and subjects are not the same in the three treatments), the T1-M3, T2-M3 and T3-M3 markets have the lowest MRE measurement, while the T1-M6, T2-M6 and T3-M6 markets show the highest MRE measurement. A possible explanation for this similarity may relate to the fact that for a given session, the series of fundamental values is kept constant across the 3 treatments. To verify this, the relationship between MRE ( P ) and the standard deviation of the relative variation in fundamental value ( σ(DFV)) of each experimental market is studied. The variable DFV is calculated as follows:

$$ DFV_t = \frac{FV_t - FV_{t-1}}{FV_{t-1}} $$

40 FV t being the fundamental value and t the trading period, running from 2 to 24.
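In R, σ(DFV) for a session could be computed as follows (a sketch; sigma_dfv is our own name):

```r
# Standard deviation of the relative variation in fundamental value.
sigma_dfv <- function(fv) {
  dfv <- diff(fv) / head(fv, -1)  # (FV_t - FV_{t-1}) / FV_{t-1}, t = 2..24
  sd(dfv)
}
```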

41 Figure 2 illustrates the MRE ( P ) measurement for each of the 18 experimental markets according to the standard deviation of the relative variation in fundamental value.

Figure 2. MRE ( P ) as a function of the standard deviation of the relative variation in fundamental value ( σ(DFV)) for the 18 experimental markets

42 We observe that the adjustment in prices ( MRE ) is a function of the relative variation in the fundamental value. The higher the variation in fundamental value, the less prices adjust efficiently to fundamental value. The M3 markets for each of the three treatments show the weakest MRE and ( σ(DFV)) values, while the M6 markets for each treatment have the strongest measurements. The other experimental sessions have intermediate values. This implies that the higher the relative variation in fundamental value, the more prices deviate from the fundamental value of the share. The positive MRE - ( σ(DFV)) relationship can be interpreted in two ways. The first interpretation is that when the variation in fundamental value is significant, participants react excessively (over-reaction) to the information contained in the dividend announcements. The second interpretation is compatible with the hypothesis of under-reaction to information: when there is a large variation in fundamental value, the subjects do not submit sufficient purchase and sale orders to allow prices to reach the fundamental value.

4.2. Informational efficiency and trading periods

43 In order to see whether stock prices align with fundamental value during trading periods, we performed a test on paired series, involving comparison of MRE(O) and MRE(C) for the 144 observations (24 periods x 6 sessions) in each treatment. The MRE measurement is used since it expresses the price deviation as a percentage of fundamental value, and is therefore more relevant in the case of pooled data. If there is price improvement during trading periods, then the differences between MRE(O) and MRE(C) should be positive and significant. In this case, prices adjust to fundamental value during trading periods. Since the results of skewness, kurtosis and global normality tests indicate that the two series MRE(O) and MRE(C) for each of the three treatments are not normally distributed, a non-parametric Wilcoxon test is performed.

Table 3. Comparison of the adjustment of opening and closing prices to fundamental value

Ti : Treatment ( i from 1 to 3); O : opening price; C : closing price; MRE(O) : mean relative error between opening price and fundamental value; MRE(C) : mean relative error between closing price and fundamental value; Wilcoxon Z is the statistic indicating whether the matched-pairs MRE(O) and MRE(C) are different, and P is the associated probability.

  • 10 Theissen (2000) shows that correction of price errors is non-significant in the fixing market (nega (...)

44 The results show a difference between the three treatments. For T1 and T3, the change in price errors is positive, which implies that the closing prices are closer than the opening prices to the fundamental value. This convergence of prices towards the fundamental value is a mean 1.02% and 0.75% respectively for T1 and T3. However, the Wilcoxon test shows that the improvement in informational efficiency is non-significant (P = 0.159 and 0.125 > 0.05). For T2, the MRE measurement is lower for opening prices than for closing prices: the opening prices are therefore closer than the closing prices to the fundamental value. The difference is a mean -1.39%, but non-significant according to the Wilcoxon test. Thus, no statistically significant difference is observed between the MRE(O) and MRE(C) pairs. This result is consistent with Theissen (2000), who shows a non-significant price error reduction in experimental double-auction markets 10 .
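A hedged sketch of this matched-pairs comparison in R, on placeholder series standing in for the 144 pooled MRE(O) and MRE(C) observations:

```r
# Wilcoxon signed-rank test on the paired MRE(O) and MRE(C) series.
set.seed(1)
mre_open  <- runif(144, 0.05, 0.25)              # placeholder data
mre_close <- mre_open + rnorm(144, 0, 0.02)      # placeholder data
wilcox.test(mre_open, mre_close, paired = TRUE)  # H0: no median difference
```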

4.3. Price stability during trading periods

45 We investigate the equality of opening prices ( O ), average transaction prices ( P ) and closing prices ( C ). For the three treatments, a test on paired series is performed using the following three pairs: (O – P), (C – P) and (O – C). Each series consists of 144 observations. Studying the distribution of the series allows us to conclude that they follow a normal pattern, so a Student test on paired series is applied.

Table 4. Price stability during trading periods: paired Student tests on (O – P), (C – P) and (O – C)

46 For all treatments, the mean difference in each pair of observations is close to 0 for the three pairs of series. The two-tailed probability of the Student test is higher than 5% for all combinations. The mean difference between the pairs of observations is therefore null, and there is no significant difference between the three prices studied. The results of this test, combined with those of the non-parametric test performed on the MRE(O) and MRE(C) series, show that prices remain stable until the end of the trading period. No conclusion can be drawn from these results as to the nature of the participants’ reaction to information: they may under- or over-react to the disclosure of dividends, but prices remain stable during the trading periods. This stability of prices can be explained by the mimetic behavior of the subjects. When a new dividend is disclosed at the beginning of each period, the subjects submit buying and selling orders in the open order book. The supply of and demand for the share implies an opening price. The subjects have access to all the information on the trading screen, and more specifically to orders and transaction prices. Although each subject has their own trading strategy, they tend to follow the other participants. This mimetic behavior leads to price stability during the trading period.
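For illustration only, the three paired Student tests could be run in R as follows (placeholder data in place of the experimental series):

```r
# Paired t-tests on the (O - P), (C - P) and (O - C) couples.
set.seed(2)
open_p  <- runif(144, 18, 22)            # placeholder opening prices
avg_p   <- open_p + rnorm(144, 0, 0.3)   # placeholder average prices
close_p <- avg_p  + rnorm(144, 0, 0.3)   # placeholder closing prices
t.test(open_p,  avg_p,   paired = TRUE)  # O - P
t.test(close_p, avg_p,   paired = TRUE)  # C - P
t.test(open_p,  close_p, paired = TRUE)  # O - C
```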

4.4. Adjustment of prices to the fundamental value

47 So far, we have shown that the prices established in experimental markets do not reflect all the available information, and that a deviation persists between market price and fundamental value. Figure 3 provides an illustration of the relationship between the average prices (P) and fundamental values (FV) within the 18 experimental markets. We use the average transaction price (P) since prices remain stable during trading periods. Each graph represents markets characterized by a change in FV and average prices corresponding to treatments T1, T2 and T3.

Figure 3. Comparison of fundamental values ( FV ) and average market prices for the three treatments over the trading periods


48 We can clearly see that prices do not adjust to the fundamental value. The degree of mispricing increases when the variation in fundamental value is large. This is visible in all markets and especially in the M6 markets, which are characterized by high volatility in the fundamental value (see Figure 2). Mispricing between prices and fundamental values remained even during the final periods of the experimental sessions. This confirms that the learning effect was low and did not have an impact on the subjects’ trading strategies. Additionally, stocks were undervalued in bullish markets and overvalued in bearish markets, which suggests under-reaction in all experimental markets. This under-reaction is stronger in T3 (more pronounced undervaluation in bullish markets and more pronounced overvaluation in bearish markets). This undervaluation in bullish markets and overvaluation in bearish markets is similar to the pattern obtained by Kirchler (2009).

49 To determine the degree of the reaction to the information disclosed, we use the model defined by Theissen (2000) and estimate the following regression for each session of the three treatments:

$$ P_t - P_{t-1} = \alpha + \beta \left( FV_t - FV_{t-1} \right) + \gamma \left( P_{t-1} - FV_{t-1} \right) + \varepsilon_t $$

50 P = average transaction price; FV = fundamental value; t = current period ( t-1 = previous period).

  • 11 Time series regressions are privileged here since they enable us to compare different sessions of e (...)

51 According to Theissen (2000) and Kirchler (2009), if the market is efficient, then the value of β should not differ from 1. Conversely, if the subjects under-react (over-react) to information, we would expect a β coefficient lower (higher) than 1. The γ coefficient should be negative, since previous price errors should be corrected during the current period. This implies that undervaluation in bullish markets and overvaluation in bearish markets during the previous period t-1 will be corrected during the current period t . The results of the time series regressions 11 are presented in Table 5.
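Before turning to those results, here is a minimal sketch of how such a regression could be estimated in R for one session (placeholder series; the variable names are ours):

```r
# OLS estimation of: P_t - P_{t-1} = alpha + beta*(FV_t - FV_{t-1})
#                                  + gamma*(P_{t-1} - FV_{t-1}) + e_t
set.seed(3)
fv     <- 20 + cumsum(rnorm(24, 0, 0.4))  # placeholder fundamental values
prices <- fv + rnorm(24, 0, 1.5)          # placeholder average prices

dP  <- diff(prices)                       # P_t - P_{t-1}
dFV <- diff(fv)                           # FV_t - FV_{t-1}
err <- head(prices, -1) - head(fv, -1)    # P_{t-1} - FV_{t-1}

summary(lm(dP ~ dFV + err))  # efficiency: beta (on dFV) = 1, gamma (on err) < 0
```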

Table 5. Adjustment of prices to information


52 Results show a similarity between T1 and T2, which are characterized by information symmetry between the participants. The β coefficients are significantly different from zero at the 1% level, and lower than one in 11 of the 12 experimental sessions. This confirms the existence of an under-reaction to information in these two treatments. The subjects react weakly to a variation in fundamental value. This weak adjustment of prices is consistent with Weber and Welfens (2009) and Kirchler (2009), which highlight an under-reaction to information. In session T2-M3, the variation in fundamental value is accompanied by a change in prices of 0.867. This is the only session for which H0: β=1 is accepted, implying that the market is efficient for this session. This result is explained by the low volatility in fundamental value: as discussed earlier in the descriptive analysis, this session presents the lowest ( σ(DFV)) value. Furthermore, the subjects in this session only have information for the current period, and this is reflected in a more significant adjustment of prices to fundamental value.

  • 12 Another explanation of this result is as follows: the dividend series are generated as a random walk (...)

53 Analysis of the results shows that under-reaction is more significant in the first treatment (the β coefficient is 0.583 in T1 and 0.707 in T2). This result is not in line with the behavioral finance literature, which stipulates a greater under-reaction for a high level of uncertainty (Jiang et al., 2005; Francis et al., 2007). It is explained by the fact that the subjects in T2, which is characterized by uncertainty, have no information related to future periods and therefore react more strongly to the arrival of new information (a greater surprise effect). Indeed, during a given period, T2 subjects have no information on the dividend for the next period, and their reaction is reflected more in the prices. In contrast, in T1, the dividend for the current period has already been known since the three previous periods. Each pair of sessions belonging to the treatments contains the same change in fundamental value. This shows that in the first treatment, the subjects trade shares on the basis of the fundamental value and the dividend for the current period, but do not ignore the information they hold on future dividends 12 . The sign of the γ coefficient is negative, as expected. In the 12 experimental sessions in T1 and T2, 7 coefficients (4 in T1 and 3 in T2) are significant at the 5% level. This shows that previous evaluation errors are corrected during the current period.

54 Treatment T3 presents results that differ from the first two treatments. The β coefficients are positive but non-significant. All these coefficients are close to 0 and are lower than 1 at the 1% level in all 6 experimental sessions. This shows that the under-reaction is more pronounced in T3 characterized by information asymmetry between the subjects. This result is confirmed by the average β coefficient, which stands at 0.100 and is strictly lower than 0.583 for T1 and 0.707 for T2. All the γ coefficients for the 6 experimental sessions are significant. This proves that the under-reaction (highly pronounced in T3) of the previous periods is corrected during the current period.

5. Conclusion 

55 We investigate the semi-strong informational efficiency hypothesis using the experimental method. This method has the advantage of controlling the information held by subjects, and subsequently enables us to compare the resulting prices with the fundamental value. Following Huber et al. (2008) and Kirchler (2010), we use an experimental protocol with three treatments that are differentiated according to the nature of the information disclosed to the subjects. Consistent with Theissen (2000), we find that informational efficiency does not improve during trading periods. The deviation between price and fundamental value is high when the variation in the fundamental value is large. Informational efficiency is therefore weak when the fundamental value is volatile. The findings also show that prices remain stable during trading periods. The regression results and the tests performed before them (the MRE(O) – MRE(C) comparison and the price stability tests) show that when information is first released, evaluation errors (under-reaction) occur on the experimental markets and remain uncorrected until the end of the trading periods. In treatment T2, which is characterized by uncertainty about future dividends, subjects show impulsive behavior and react strongly to new information. This indicates that information is incorporated into prices to a greater extent than in treatment T1. It also appears that the subjects under-react to information in all three treatments. The absence of any trend after the information announcement shows that prices do not converge towards the fundamental value of the share and that the under-reaction is not corrected during the trading periods. This effect is more pronounced when the participants do not hold the same information on future dividends. In treatment T3, with information asymmetry, the information dissemination process is therefore much slower, and price adjustment to fundamental value is weak. This finding is consistent with Kirchler (2009), who demonstrates that under-reaction to information is more pronounced in the context of asymmetric information.

56 The results may interest several stock market actors. In the treatment with asymmetric information (T3), the deviation between stock prices and the fundamental value is more significant than in the treatments with symmetric information (T1 and T2). This suggests that managers should disclose all relevant information about the company to minimize the information asymmetry between investors, which supports better informational efficiency. Additionally, high volatility in dividends (and therefore in fundamental value) leads investors to under-react to information. Thus, managers should maintain a clear dividend distribution strategy. If a gap between prices and fundamental values persists, it will be exploited by investors building arbitrage strategies that ensure securities prices adjust to the fundamental value, which will contribute to better informational efficiency.

57 To complete our analysis, it would be useful to study the reasons for under-reaction to information. An initial line of research involves focusing on so-called ‘rational’ explanations related to flaws in financial markets, particularly the illiquidity of stocks and the impact of transaction costs on asset prices. A second research approach considers the limited rationality of investors, with the existence of cognitive biases. Anchoring bias and self-attribution bias could supply an explanation for under-reaction to information. The disposition effect, defined as the tendency for investors to sell winning stocks too soon and hold losing stocks too long, is another possible source of under‑reaction to information. It would thus be useful to explore the relationship between the disposition effect and under-reaction to information further, in particular through an experimental method.

Bibliography

Aharony J. and Swary I. (1980), « Quarterly Dividend and Earnings Announcements and Stockholders' Returns: An Empirical Analysis », Journal of Finance , vol. 35 , p. 1-12.

Alwathainani A.M. (2012), « Consistent Winners and Losers », International Review of Economics and Finance , vol.  21, p. 210-220.

Anderson H., Cahan S., and Rose L.C. (2001), « Stock Dividend in an Imputation Tax Environment », Journal of Business Finance and Accounting , vol.  28, p. 653-669.

Asquith P. and Mullins D.W. (1983), « The Impact of Initiating Dividend Payments on Shareholders' Wealth », Journal of Business, vol. 56, n° 1, p. 77-96.

Ball R. and Brown P. (1968), « An Empirical Evaluation of Accounting Income Numbers », Journal of Accounting Research , vol.  6, p. 159-178.

Barberis N., Shleifer A. and Vishny R.W. (1998), « A Model of Investor Sentiment », Journal of Financial Economics , vol.  49, p. 307-343.

Bernard V. and Thomas J. (1989), « Post-Earnings-Announcement Drift: Delayed Price Response or Risk Premium? », Journal of Accounting Research , vol.  27, p. 1-36.

Boehme R.D. and Danielsen B.R. (2007), « Stock-Split Post-Announcement Returns: Underreaction or Market Friction? », Financial Review , vol.  42, p. 485-506.

Bouattour M. (2007), « The Information Content of Dividend Increase Announcements: Evidence from The French Stock Exchange », i-manager’s Journal on Management , vol.  2, p. 34-41.

Brandouy O. and Barneto P. (1999), « Incertitude et fourchettes de prix sur un marché d’enchères : les apports du laboratoire », Finance Contrôle Stratégie , vol. 2, p. 87-113.

Brown S. and Warner J. (1985), « Using Daily Stock Returns: The Case of Event Studies », Journal of Financial Economics, vol. 14, p. 3-31.

Cadsby C.B. and Maynes E. (1998), « Laboratory Experiments in Corporate and Investment Finance: A Survey », Managerial and Decision Economics , vol.  19, p. 277-298.

Cheng L.Y., Yan Z., Zhao Y. and Gao L.M. (2015), « Investor Inattention and Under-Reaction to Repurchase Announcements », Journal of Behavioral Finance, vol. 16, p. 267-277.

Clements A., Drew M.E., Reedman E.M. and Veeraraghavan M. (2009), « The Death of the Overreaction Anomaly? A Multifactor Explanation of the Contrarian Returns », Investment Management and Financial Innovations , vol.  6, p. 76-85.

Daniel K., Hirshleifer D. and Subrahmanyam A. (1998), « Investor Psychology and Security Market Under- and Overreactions », Journal of Finance , vol.  53, p. 1839-1886.

Daniel K., Hirshleifer D. and Subrahmanyam A. (2001), « Overconfidence, Arbitrage and Equilibrium Asset Pricing », Journal of Finance , vol.  56, p. 921-965.

Dasilas A. and Leventis S. (2011), « Stock Market Reaction to Dividend Announcements: Evidence from The Greek Stock Market », International Review of Economics and Finance , vol.  20, p. 302-311.

Davis D.D. and Holt C.A. (1993), «  Experimental Economics », Princeton University Press.

Docherty P. and Easton S. (2012), « Market Efficiency and Continuous Information Arrival: Evidence from Prediction Markets », Applied Economics , vol.  44, p. 2461-2471.

Fama E.F. (1965), « Random Walks in Stock Market Prices », Financial Analysts Journal , vol.  21, p. 55-59.

Fama E.F. (1970), « Efficient Capital Markets: A Review of Theory and Empirical Work », Journal of Finance , vol.  25, p. 383-417.

Fama E.F. (1991), « Efficient Capital Markets: II », Journal of Finance , vol.  46, p. 1575-1617.

Fama E.F. (1998), « Market Efficiency, Long-Term Returns and Behavioral Finance », Journal of Financial Economics , vol.  49, p. 283-306.

Fischbacher U. (2007), « Z-Tree: Zurich Toolbox for Readymade Economic Experiments », Experimental Economics , vol.  10, p. 171-178.

Francis J., Lafond R., Olsson P. and Schipper K. (2007), « Information Uncertainty and Post-Earnings-Announcement-Drift », Journal of Business Finance & Accounting, vol. 34, n° 3-4, p. 403-433.

Friedman D. (1993), « Privileged Traders and Asset Market Efficiency: A Laboratory Study », Journal of Financial and Quantitative Analysis , vol.  28, p. 515-534.

Grinblatt M. and Han B. (2005), « Prospect Theory, Mental Accounting and Momentum », Journal of Financial Economics , vol.  78, p. 311-339.

Hanke M., Huber J., Kirchler M. and Sutter M. (2010 ), « The Economic Consequences of a Tobin Tax - An Experimental Analysis », Journal of Economic Behavior and Organization , vol.  74, p. 58-71.

Harrison G.W. and List J.A. (2004 ), «  Field Experiments » , Journal of Economic literature , vol.  42, n° 4, p. 1009 - 1055.

Hirshleifer D. (2001 ), «  Investor Psychology and Asset Pricing » , Journal of Finance , vol.  56, p. 1533 - 1597.

Hommes C., Sonnemans J., Tuinstra J. and Van de Velden H. (2005), « Coordination of Expectations in Asset Pricing Experiments », Review of Financial Studies , vol.  18, p. 955-980.

Hsu C.H., Chiang Y.C. and Liao T.L. (2013), « Overreaction and Underreaction in the Commodity Futures Market », International Review of Accounting, Banking and Finance , vol.  5, p. 61-83.

Huber J., Kirchler M. and Sutter M. (2008 ), « Is More Information Always Better? Experimental Financial Markets with Cumulative Information », Journal of Economic Behavior and Organization , vol.  65, p. 86-104.

Huynh T.D. and D.R. (2017), «  Stock Price Reaction to News: The Joint Effect of Tone and Attention on Momentum », Journal of Behavioral Finance , vol.  18, p. 304-328.

Ikenberry D., Lakonishok J. and Vermaelen T. (1995), « Market Underreaction to Open Market Share Repurchases », Journal of Financial Economics , vol.  39, p. 181-208.

Ikenberry D.L. and Ramnath S. (2002), « Underreaction to Self-selected News Events: The Case of Stock Splits », Review of Financial Studies , vol.  15, p. 489-526.

Jensen M.C. (1978), « Some Anomalous Evidence Regarding Market Efficiency », Journal of Financial Economics , vol.  6, p. 95-101.

Jiang G., Lee C.M.C. and Zhang Y. (2005), « Information Uncertainty and Expected Returns », Review of Accounting Studies , vol.  10, n° 2-3, p. 185-221.

Kelly B. and Ljungqvist A. (2012), « Testing Asymmetric-Information Asset Pricing Models », Review of Financial Studies , vol.  25, p. 1366-1413.

Kirchler M. and Huber J. (2007), « Fat Tails and Volatility Clustering in Experimental Asset Markets », Journal of Economic Dynamics and Control , vol.  31, p. 1844-1874.

Kirchler M. and Huber J. (2009 ), « An Exploration of Commonly Observed Stylized Facts with Data from Experimental Asset Markets », Physica A: Statistical Mechanics and its Applications , vol.  388, p. 1631-1658.

Kirchler M. (2009), « Underreaction to Fundamental Information and Asymmetry in Mispricing Between Bullish and Bearish Markets. An Experimental Study », Journal of Economic Dynamics and Control , vol.  33, p. 491-506.

Kirchler M., Huber J. and Kleinlercher D. (2011 ), « Market Microstructure Matters when Imposing A Tobin Tax - Evidence from the Lab », Journal of Economic Behavior and Organization , vol.  80, p. 586-602.

Kudryavtsev A. (2018), « The Availability Heuristic and Reversals Following Large Stock Price Changes », Journal of Behavioral Finance , vol.  19, n° 2, p. 159-176.

Lamont O. and Thaler R.H. (2003), « Anomalies: The Law of One Price in Financial Markets », Journal of Economic Perspectives , vol.  17, n° 4, p. 191-202.

Lei V. and Vesely F. (2009), « Market Efficiency: Evidence from A No-Bubble Asset Market Experiment », Pacific Economic Review , vol.  14, p. 246-256.

Lei V., Noussair C. and Plott C. (2001), « Non Speculative Bubbles in Experimental Asset Markets: Lack of Common Knowledge of Rationality Vs. Actual Irrationality », Econometrica , vol.  69, p. 831-859.

Levitt S.D. and List J.A. (2007), « What Do Laboratory Experiments Measuring Social Preferences Reveal About the Real World? », Journal of Economic perspectives , vol.  21, n° 2, p. 153-174.

Loughran T. and Ritter J. (1995), « The New Issues Puzzle », Journal of Finance , vol.  50, p. 23-52.

Mai H.M. (1995), « Sur-réaction sur le Marché Français des Actions au Règlement Mensuel 1977–1990 », Finance , vol. 16, p. 113-136.

Malkiel B. (1992), « Efficient Market Hypothesis », New Palgrave Dictionary of Money and Finance , Macmillan.

Maymin P.Z. (2011), « Self-Imposed Limits of Arbitrage », Journal of Applied Finance , vol.  2, p. 88-105.

Michaely R., Thaler R.H. and Womack K.L. (1995), « Price Reactions to Dividend Initiations and Omissions: Overreaction or Drift? », Journal of Finance , vol.  50, p. 573-608.

Noussair C., Robin S. and Ruffieux B. (2001), « Price Bubbles in Laboratory Asset Markets with Constant Fundamental Values », Experimental Economics , vol.  4, p. 87-105.

Nuzzo S. and Morone A. (2017), « Asset Markets in The Lab: A Literature Review », J ournal of Behav ioral and Experimental Finance , vol.  13, p. 42-50.

Plott C. (1991), « Will Economics Become an Experimental Science? », Southern Economic Journal , vol.  57, p. 901-919.

Porter D.P. and Smith V.L. (2003), « Stock Market Bubbles in The Laboratory », Journal of Behavioral Finance , vol.  4, p. 7-20.

Ritter J. (1991), « The Long-Run Performance of Initial Public Offerings », Journal of Finance , vol.  46, p. 3-27.

Rosenthal L. and Young C. (1990), « The Seemingly Anomalous Price Behavior of Royal Dutch/Shell and Unilever N.V./PLC », Journal of Financial Economics , vol.  26, n° 1, p. 123-141.

Samuelson P. (1965), « Proof that Properly Anticipated Prices Fluctuate Randomly », Industrial Management Review , vol.  6, p. 41-49.

Shleifer A. and Vishny R.W. (1997), « The Limits of Arbitrage », Journal of Finance , vol.  52, p. 35-55.

Smith V.L., Suchanek G.L. and Williams A.W. (1988), « Bubbles, Crashes and Endogenous Expectations in Experimental Spot Asset Markets », Econometrica , vol.  56, p. 1119-1151.

Smith V.L., Van Boening M. and Wellford C.P. (2000), « Dividend Timing and Behavior in Laboratory Asset Markets », Economic Theory , vol.  16, p. 511-528.

Spyrou S., Kassimatis K. and Galariotis E. (2007), « Short-Term Overreaction, Underreaction and Efficient Reaction: Evidence from The London Stock Exchange », Applied Financial Economics , vol.  17, p. 221-235.

Tai Y.N. (2014), « Investor Overreaction in Asian and US Stock Markets: Evidence from the 2008 Financial Crisis », International Journal of Business and Finance Research , vol.  8, p. 71-93.

Theissen E. (2000), « Market Structure, Informational Efficiency and Liquidity: An Experimental Comparison of Auction and Dealer Markets », Journal of Financial Markets , vol.  3, p. 333-363.

Truong C. (2011), « Post-Earnings Announcement Abnormal Return in The Chinese Equity Market », Journal of International Financial Markets, Institutions and Money , vol.  2, p. 637-661.

Weber M. and Welfens F. (2009), « How Do Markets React to Fundamental Shocks? An Experimental Analysis on Underreaction and Momentum », SSRN Working Paper Series . University of Mannheim - Department of Banking and Finance.

Zhang L. (2006), « Efficient Estimation of Stochastic Volatility Using Noisy Observations: A Multi-Scale Approach ». Bernoulli , vol.  12, n° 6, p. 1019-1043.

Zheng S.X. (2007), « Market Underreaction to Free Cash Flows from IPOs », Financial Review , vol.  42, p. 75–97

1 Other definitions of market efficiency have been formulated. For example, Jensen (1978) argues that a market is efficient when no investor can make substantial gains by speculating on the basis of information that is available on this market. The stock price incorporates all relevant information so that an investor cannot, by buying or selling this stock, make any profit greater than the transaction costs. While this definition includes transaction costs, Malkiel (1992) proposes another definition closely related to Jensen’s (1978): a market is efficient if it fully and correctly reflects all information related to stocks. In such a market, prices would thus be unaffected if all information was revealed to all market participants.

2 Studies in behavioral finance show that investors are subject to cognitive biases that influence their decisions and, consequently, the formation of prices on financial markets (Barberis et al., 1998; Daniel et al., 1998; Grinblatt and Han, 2005).

3 In his article published in 1998, Fama suggests: "Market efficiency must be tested jointly with a model for expected (normal) returns, and all models show problems describing average returns" (p. 285).

4 For a summary of the experimental method, see in particular Plott (1991), Davis and Holt (1993), Cadsby and Maynes (1998) and Nuzzo and Morone (2017).

5 In experiments conducted at the University of Innsbruck (Austria), Kirchler and Huber (2007) selected a risk-free interest rate and a risk‑adjusted interest rate of 2% and 8.5% respectively.

6 For this third treatment, we are more restrictive about the number of subjects in each session. To ensure that no information level has more impact on trading than any other, we set 3 subjects for each information level, which necessarily implies 12 subjects for each experimental session. This is not the case for T1 and T2: in these two treatments there is only one information level, and the number of subjects has no impact on trading prices. Note that the number of subjects in the experimental sessions of the three treatments is not included in the analysis that follows.

7 We assume that the risk-adjusted interest rate remains unchanged for the 3 treatments and for the 4 information levels of the T3 treatment (Kirchler and Huber, 2007, 2009). We inform the subjects of the risk-adjusted interest rate since we do not study here their risk preferences, but the adjustment of prices to the arrival of new information.

8 Similarly, Hanke et al. (2010) and Kirchler et al. (2011) test market efficiency by using the absolute deviation between the average price within a trading period and the fundamental value.

9 In the study by Theissen (2000), an improvement in prices between opening and closing is found in three of six experimental double-auction markets.

10 Theissen (2000) shows that correction of price errors is non-significant in the fixing market (negative value) and double-auction market (positive value). In contrast, this correction is significant in the dealer market.

11 Time series regressions are preferred here since they enable us to compare the different sessions of each treatment. Panel data regressions were also conducted and led to similar results for the three treatments. These results are not presented for reasons of brevity, but are available from the authors upon request.

12 Another explanation of this result is as follows: the dividend series are generated as a random walk, which implies that some dividend increases are followed by dividend decreases. This information is known in T1, and the subjects trade on the basis of it (an increase in the dividend for the current period followed by a decrease). However, T2 subjects are only informed of the dividend of the current period, which implies that a dividend increase in T2 will be more visible in prices. The reasoning is the same for dividend decreases followed by dividend increases.

Electronic reference

Mondher Bouattour and Isabelle Martinez, “Efficient market hypothesis: an experimental study with uncertainty and asymmetric information”, Finance Contrôle Stratégie [Online], 22-4 | 2019, online since 09 December 2019, connection on 06 June 2024. URL: http://journals.openedition.org/fcs/3821; DOI: https://doi.org/10.4000/fcs.3821

About the authors

Mondher Bouattour

CERIIM & LGCO - University of Toulouse 3 Paul Sabatier [email protected]


Isabelle Martinez

TSM Research - UMR 5303 CNRS University of Toulouse 1 Capitole University of Toulouse 3 Paul Sabatier [email protected]



Testing the Random Walk Hypothesis with R, Part One

StuartReid | On November 20, 2016

Whilst working on some code for my Masters I kept thinking, "it would be really awesome if there was an R package which just consumed a price series and produced a data.frame of results from multiple randomness tests at multiple frequencies". So I decided to write one and it's named  emh after the Efficient Market Hypothesis .

The emh package is extremely simple. You download a price series zoo object from somewhere (e.g. Quandl.com ) and then pass the zoo object into the is_random() function in emh. This function will return an R data.frame containing the results of many randomness tests applied to your price series at different frequencies / lags:
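For example, a minimal sketch of that workflow (the Quandl dataset code here is illustrative; is_random() is the package's entry point):

```r
# install.packages(c("devtools", "Quandl"))
# devtools::install_github(repo = "stuartgordonreid/emh")
library(emh)
library(Quandl)

# Download a price series as a zoo object (the dataset code is illustrative)
prices <- Quandl("WIKI/MSFT", type = "zoo")

# Run the battery of randomness tests at multiple frequencies / lags;
# the result is an R data.frame with one row per test per frequency
results <- is_random(prices[, "Adj. Close"])
head(results)
```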

This is my first open source R package, so I invite you to use the package and, if you encounter any issues or missing must-have features, please let me know about them on the GitHub repository. I will also be giving an R/Finance talk about market efficiency this Thursday at Barclay's Think Rise in Cape Town, so please come through.

With that said, here is the outline for the rest of this article,

  • What is the Efficient Market Hypothesis?
  • What is the Random Walk Hypothesis?
  • Why should you care about randomness?
  • How to install emh in R from GitHub
  • Example of how to use emh
  • The Independent Runs Test
  • The Durbin-Watson Test
  • The Ljung-Box Test
  • The Breusch-Godfrey Test
  • The Bartels Rank-based Variance Ratio Test
  • The Lo-MacKinlay Variance Ratio Test
  • Conclusions and Future Plans

This article is part one of at least three parts. In the parts that follow I will continue going through various statistical tests of randomness and explain if and how they relate to the ones we have already covered.

Background Information and Context

I've written about market efficiency and randomness for a while now (since April of 2015) and over that time my understanding of the topic has grown exponentially. I recommend taking a look at my previous articles - which can be found here, here, here, and here - but if you don't have the time, the subsections below are designed to provide just the right amount of background information and context to understand the point of the package.

The Efficient Market Hypothesis (EMH) is an economic theory which proposes that financial markets accurately and instantaneously incorporate information about any given security into that security's current price. The Efficient Market Hypothesis was introduced by Professor Eugene Fama from 1965 to 1970. If it is true, then actively trading securities based on historical information cannot generate abnormal returns. Abnormal returns are defined as consistent returns over-and-above those produced by the market which were obtained whilst taking on less risk than that of the market.

The Father of Modern Empirical Finance, Eugene Fama

The Efficient Market Hypothesis does not say you can't beat the market in terms of cumulative return. Theoretically that's "easy", you can just buy a leveraged index ETF and hold on to your pants [1]. What the Efficient Market Hypothesis says is that there is no free lunch. If you want higher returns, you need to take on higher risk.

The Efficient Market Hypothesis distinguishes between weak, semi-strong, and strong form efficient markets according to the subset of information which the market takes into account in current security prices.

  • Weak form efficient markets take into account all historical price data in current security prices.
  • Semi-strong form efficient markets take into account all relevant publicly available information in current security prices.
  • Strong form efficient markets take into account all relevant (even insider) information in current security prices.

Market efficiency is a by-product of market participation by information arbitrageurs. Information arbitrageurs are economic agents which buy undervalued assets and sell overvalued assets based on new information as it comes out. In so doing these information arbitrageurs reflect the new information into security prices.

If markets were perfectly efficient, the expected return to being an information arbitrageur would be zero; therefore markets cannot be perfectly efficient (see Grossman and Stiglitz 1980). Economic rents earned by information arbitrageurs are thus earned because of the "inefficient" actions of noise traders (see Black 1986).

Noise traders are economic agents which buy and sell assets for reasons other than new information. An example is a large insurance company which liquidates some of its holdings to pay out a large insurance claim. Efficient markets cannot exist without both information arbitrageurs and noise traders. Passive index investors are, in my opinion, just another form of noise trader ... feel free to disagree in the comment section below ;-).

The Random Walk Hypothesis is a theory about the behaviour of security prices which argues that they are well described by random walks, specifically sub-martingale stochastic processes. The Random Walk Hypothesis predates the Efficient Market Hypothesis by 70 years, but it is actually a consequence, not a precedent, of it.

If a market is weak-form efficient then the change in a security's price, with respect to the security's historical price changes, is approximately random because the historical price changes are already reflected in the current price. This is why randomness tests are typically used to test the weak-form efficient market hypothesis.

I say "approximately" random because even if the market is efficient you should - in theory at least - be compensated for taking on the risk of holding assets. This is called the market risk premium and it is the reason buy-and-hold investing and index investing don't have expected returns equal to zero in the long run.

Consider the graph below. The log price of the Dow Jones Industrial Average from 1896 to 2016 is shown in black. If the market was truly random this line would not consistently increase like it does.

The market goes up because investors deserve to be compensated for the risk they took when they invested in the stock market over some other investment e.g. cash. This return is called the equity risk premium and it has been approximated by a compounded 126-day rolling average return in the graph [2] (the grey line). The red line represents the compounded excess / residual return of the market over our approximation of the equity risk premium.

Assuming our approximation of the market risk premium is correct - which it isn't - the grey line represents the market and it is what you should expect to have made. It is the signal. The red line should just be noise or a Martingale process.  Thinking along these lines we soon realise that there are a few ways to test the random walk hypothesis:

  • Predict or find statistically significant patterns in the equity risk premium (market timing),
  • Predict or find statistically significant patterns in the residual returns (not prices),
  • Predict or find statistically significant patterns in the sign or rank of the residual returns, or
  • Use non-parametric statistical tests of randomness which factor in the equity risk premium a.k.a drift.

Most statistical tests of randomness boil down to approaches (2), (3), or (4). The purpose of the emh R package is to make correctly running all of these statistical tests on financial price time series as easy as possible :-).

Note that approaches (1), (2), and (3) are also essentially what active investors try to do on a daily basis! Therefore, it shouldn't come as a surprise that converting any test of the Random Walk Hypothesis to a test of the Efficient Market Hypothesis essentially involves testing whether the identified patterns are also economically significant. An economically significant pattern is one which can be exploited to generate abnormal returns.

A lot of people ask me why I am so obsessed with randomness. I am obsessed with randomness because all forms of investing can be easily understood in the context of market efficiency and randomness testing. This is an over-simplification, but here's how I see the world of investing through the lens of randomness and market efficiency,

Classical Mean Variance Portfolio Optimization (MVO) assumes that security prices are stationary random walks that are fully described by their first two moments ... hence the name mean variance  optimization.

Quantitative asset pricing models argue that security prices are random and can be effectively modelled by stochastic processes. These stochastic processes are often used in Monte Carlo simulations to price assets.

High Frequency Traders and Arbitrageurs argue that security prices are not random at high frequencies either because they exhibit patterns or because the law of one price is violated across geographic regions.

Technical Analysis argues that security prices and volume data are not random at any frequency because they exhibit economically significant patterns which are identifiable and exploitable using deterministic technical indicators.

That said, no matter how powerful your models are, if security prices are random with respect to your dataset you will never be able to produce abnormal returns using it. This is the reason why I think you should care about randomness tests. They can help identify inefficient securities / markets, useful frequencies, and even useful datasets.

Comments on the Above

[1] I actually have some serious concerns about leveraged ETPs. So do not interpret this statement as financial advice. It is most definitely not. I might write a blog post about this sometime soon.

[2] Most randomness tests actually work on the residuals between the data and a linear regression fitted to that data. The emh package allows the user to decide how they want to calculate residual returns ... but personally I think that computing the residual returns by subtracting a moving average is more accurate because, firstly, it does not assume that the risk premium is constant and, secondly, using a linear regression makes the implicit assumption that you know what the parameter values of the linear regression are upfront, which is basically a form of look-ahead bias.
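As a sketch of that idea (this is not the package's internal code), residual returns can be computed by subtracting a right-aligned rolling mean of the log returns:

```r
library(zoo)

# Reuse the adjusted-close series from the earlier sketch
close <- prices[, "Adj. Close"]
log_returns <- diff(log(close))

# Approximate the (time-varying) risk premium with a 126-day rolling
# average return; right-aligned, so each value uses only the current
# and past observations (no look-ahead bias)
risk_premium <- rollmeanr(log_returns, k = 126, fill = NA)

# Residual returns: whatever is left over after removing the premium
residual_returns <- na.omit(log_returns - risk_premium)
```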

Introduction to the emh R Package

There are many randomness tests out there and many of them have been used to test the efficiency of markets. However, most randomness tests have biases and gaps. What may be random according to one test may be non-random according to another. This is the reason why industries outside of finance which rely on secure random number generation (such as the information security industry) typically make use of large batteries of randomness test suites to conclude anything about the randomness of a particular sequence.

Quantitative finance should aim to do the same and that is where the emh package comes in. emh aims to provide a simple interface to a suite of randomness tests commonly used for testing market efficiency.

How to install emh in R from GitHub

I am only planning on uploading the emh package to CRAN once I have added about 15 randomness tests and 5 or 6 stochastic process models, so for now the package can only be installed from GitHub via devtools:
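```r
# install.packages("devtools")  # if you do not already have devtools
# (the same command appears in the comment thread at the end of this post)
devtools::install_github(repo = "stuartgordonreid/emh")
```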

Example Application to the S&P SMA Index

Originally I was planning on showing a demonstration of the package in this blog post; however, Jupyter notebooks are far better suited to the task than this website. As such, you can view an example of the package using the Jupyter Notebook Viewer or directly in the GitHub repository in the examples directory. You can also clone the repository and open up the notebook on your own Jupyter notebook server. Please let me know if you have any problems.

The Six Statistical Tests in emh v0.1.0

In emh v0.1.0 I have included six simple randomness tests which are used all the time when studying market efficiency. Generally speaking there are five types of randomness tests - runs tests, serial correlation tests, unit root tests, variance ratio tests, and complexity tests. In version 0.1.0 there is one runs test, three serial correlation tests, and two variance ratio tests. In subsequent versions there will be many more tests added in each category.

I wrote about runs tests before on this blog in my second randomness article, Hacking The Random Walk Hypothesis with Python; you can read what I had to say here and here. To put it very simply, the runs test is a non-parametric test (meaning that it does not assume much about the underlying distribution of the data) which works on binarized returns. Binarized returns are returns which have been converted to binary, i.e. 1 or 0 depending on whether they were positive returns (+) or negative returns (-). A run is any consecutive sequence of either 1s (+) or 0s (-); for example, the sequence 1 1 0 0 0 1 contains three runs.

Note that this randomness test is conditional  on the number of 1's and the number of 0's. Therefore drift, the general tendency of markets to go up over time rather than down, does NOT impact the results. The number of 1's in the sequence could be 90% and the above statement would still hold true. Furthermore, because the runs test only deals with the sign of the return and not its magnitude, it is not affected by stochastic volatility.

That having been said, if patterns exist in the magnitude  or size of returns in either direction over time, such as would be the case in a mean-reverting or momentum-driven market, the runs test will not be able to identify these. 
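As a concrete sketch of the idea, here is a runs test on binarized returns using the standalone implementation in the tseries package (independent of emh), reusing the residual returns computed in footnote [2]:

```r
library(tseries)

# Returns as a plain numeric vector, dropping zero returns
r <- as.numeric(residual_returns)
r <- r[r != 0]

# Binarize: "+" for up moves, "-" for down moves
signs <- factor(ifelse(r > 0, "+", "-"))

# The runs test rejects randomness when there are too few or too many
# runs relative to what independence would imply, given the counts of
# "+" and "-" (so drift does not bias the result)
runs.test(signs)
```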

The Durbin-Watson test is named after James Durbin and Geoffrey Watson. Their test looks for the presence of autocorrelation, also known as serial correlation, in time series. Autocorrelation is the correlation between a time series and itself lagged by some amount. If a time series exhibits statistically significant autocorrelation it is considered non-random because it means that historical information can be used to predict future events.
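A quick sketch with the lmtest package (reusing the return vector r from above); an intercept-only regression makes the residuals just demeaned returns, so the statistic measures their first-order autocorrelation:

```r
library(lmtest)

# Durbin-Watson test on an intercept-only model: a statistic near 2
# suggests no first-order autocorrelation in the (demeaned) returns
dwtest(r ~ 1)
```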

The Ljung-Box Test is named after Greta Ljung and George Box, source of the famous quote - "all models are wrong, but some are useful". The Ljung-Box test checks whether any autocorrelation in a group of autocorrelations of a time series is significantly different from zero; significant autocorrelations are expected when the financial time series being tested exhibits either momentum or mean-reversion. The Ljung-Box test statistic is calculated as follows,

Q = n(n + 2) Σ_{k=1}^{h} ρ̂_k² / (n − k)

where n is the sample size, ρ̂_k is the sample autocorrelation at lag k, and h is the number of lags being tested; under the null hypothesis of randomness, Q follows a chi-squared distribution with h degrees of freedom.
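Base R ships this test as Box.test(); applied to the same return vector r used above:

```r
# Ljung-Box test over the first 10 autocorrelations; a small p-value
# means at least one autocorrelation is significantly different from zero
Box.test(r, lag = 10, type = "Ljung-Box")
```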

The Durbin-Watson Test and the Ljung-Box Test are used very often; however, some studies indicate that they are biased toward the null hypothesis. In other words, they are more likely to say that a time series is random than non-random (see chapter 6). Such biases are actually the topic of a journal article I am working on :-).

For more information on the above statistical tests for autocorrelation I recommend listening to Professor Ben Lambert's videos on testing for autocorrelation. I found them to be very helpful and relatively easy to understand. 

  • Serial Correlation Testing, Introduction

In 1941 John Von Neumann, a hero of early classical computing and mathematics in general, introduced a test of randomness based on the ratios of variances computed at different sampling intervals. This test, known as the Von Neumann Ratio Test, is a very good test of randomness under the assumption of normality.
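The rank-based variant listed in the outline above (Bartels' test) drops that normality assumption by working on the ranks of the observations; a sketch using the randtests package:

```r
library(randtests)

# Bartels' rank version of the Von Neumann ratio test: compares
# successive squared rank differences to the rank variance, so it
# does not assume normally distributed returns
bartels.rank.test(r)
```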

The heteroscedasticity-consistent Variance Ratio test developed by Andrew Lo and A. Craig MacKinlay in 1987 is perhaps the most interesting and complex randomness test I have encountered. I wrote about this test in my third randomness article, Stock Market Prices Do Not Follow Random Walks - named after Lo and MacKinlay's paper of the same name. I highly recommend reading the above article as I will not be recapping the test here ...
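For a quick sense of the test outside of emh, here is a sketch using the vrtest package, which implements both the homoscedastic (M1) and heteroscedasticity-consistent (M2) statistics:

```r
library(vrtest)

# Lo-MacKinlay variance ratio statistics at holding periods of 2, 5 and
# 10 observations; M1 assumes homoscedasticity, while M2 is the
# heteroscedasticity-consistent version discussed here
Lo.Mac(r, kvec = c(2, 5, 10))
```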

I will, however, make one comment about this test. The test is only valid if security price changes have finite variances. In this context a security with infinite variance is one where the estimate of variance does not converge according to the central limit theorem. A number of people, upon reading this, argued that security price changes have infinite variances. This is essentially a throwback to Mandelbrot's Stable Paretian Hypothesis from the 1960's.

Those people are wrong for one simple reason: if daily security prices have infinite variance then so must weekly, monthly, quarterly, and yearly price changes. Why? Because the characteristic exponent of any stable distribution is invariant in the sampling interval. However, what we observe in reality is that lower frequency returns do have finite variances. Therefore daily returns cannot be distributed according to any stable distribution.

Mandelbrot himself admitted in later years that the "infinite variances" (variances which do not converge to a true estimate) observed in daily returns are likely to be a symptom of conditional heteroscedasticity ... which is what we generally assume when modelling security prices using Auto-regressive Conditional Heteroscedastic (ARCH) models, and this is also, to some extent, what Lo and MacKinlay were controlling for in their test.

Randomness testing - and the emh package by extension - can help to identify inefficient markets, inefficient frequencies, and information-rich datasets. The emh R package is still very new, and I will be contributing to it considerably during the December holidays and in 2017. I plan to add many more randomness tests in each of the five categories: runs tests, serial correlation tests, unit root tests, variance ratio tests, and complexity tests.

In the meantime, try it out! Let me know what you think and if you find any issues with the tests. Unlike the NIST suite, which I coded up in Python back in 2015, there are no "unit tests" against which to test my and others' implementations, so bugs are probably an inevitability. Lastly, if you are in Cape Town on Thursday evening and you find this stuff interesting, please try to come through to the R/Finance workshop.


November 20, 2016

Really nice work. Just a quick, and perhaps stupid, question - if these tests were applied to an equity curve generated by a trading strategy, would it be theoretically justified to say that the underlying trading strategy is non-random?

Thanks, it's still very much a work-in-progress. I really appreciate your blog by the way. That is far from a silly question :-), thank you for asking it! There is nothing wrong with that statement. However, we do need to think about what it means for a strategy's equity curve to be "non-random" according to the tests. The tests used in the emh package take drift into account, so if your logged equity curve looks like a straight line then when we ask emh "is this random?" we are actually asking whether the movements around the straight line are random or non-random. If it turns out that the movements are non-random, then that would imply that there is information in the movements and theoretically it might be possible to improve upon the strategy. So if you have a great, straight-line-looking log equity curve, the best result you can hope for is for the package to say that it is random. That having been said, I used the word theoretically because even if the movements around the line are non-random they may not be economically significant. I hope that makes sense, it's a little bit round-a-bout!

Thank you for making this package, it looks like you have expended tremendous effort in producing it. I am very excited to try 'emh' as well as learn from your code.

I cloned your git repo and attempted to build the package in RStudio. Unfortunately, I am on Mac OS X and it looks like the package is coded for Linux. I'll try to make a port to Mac OS X.

But serious thanks for creating this package. Very impressive.

Hi Randy, thanks 🙂 it's still early stages though. You're quite right I almost exclusively use Linux ... so I'm not exactly sure what the problem is, but if you don't come right please email me at [email protected] and I will be happy to help you out.

Please ignore my previous. I had a bad value in my ~/.R/Makevars which was the culprit ... works fine! Stupid User Error ... sorry

November 21, 2016

Nice overview and nice package.

One point that you make above: "...if daily security prices have infinite variance then so must weekly, monthly, quarterly, and yearly price changes... However, what we observe in reality is that lower frequency returns do have finite variances. Therefore daily returns cannot be distributed according to any stable distribution."

I suggest you read the paper below for a simple but very effective counter-argument to the aggregational Gaussianity hypothesis of lower frequency returns: ( http://www.sciencedirect.com/science/article/pii/S2212567115007510 )

Thanks! I have not come across this counter-argument before, I'm looking forward to reading and understanding it :-).

Kind regards Stuart Reid

November 22, 2016

The best blog in quantitative finance! In RStudio, I cannot install the package. When running devtools::install_github(repo="stuartgordonreid/emh") I get this: *** arch - i386 Error in inDL(x, as.logical(local), as.logical(now), ...) : unable to load shared object 'F:/R-3.3.2/library/emh/libs/i386/emh.dll': LoadLibrary failure: %1 is not a valid Win32 application. Please help, I have tried both the 32- and 64-bit R versions, and much more...

November 23, 2016

Hey Stylianos, thanks for the heads up. I've responded to your issues on the git repository. Hopefully we can clear this issue up.

November 24, 2016

The package now runs fine on Ubuntu and Windows 8 and 10. Great work, thank you again; I just read the roadmap, maybe we can cooperate, too.


The State of Turing Finance

Dear Readers,

In 2016 I stopped contributing to Turing Finance for career reasons. I had hoped to someday resume the blog, but my interests have evolved, and it would not be fair to you.

So, I have instead decided to build a new community over at nosible.com. This blog will remain up indefinitely, but it is, for all intents and purposes, shut down and will not be revived.

If you are interested in following my thoughts, you are welcome to follow the nosible blog or follow me on Twitter . Thank you for all the support over the years, you are awesome.

Yours sincerely, Stuart Reid

Adaptive Market Hypothesis (AMH): Overview, Examples, Criticisms


What Is Adaptive Market Hypothesis (AMH)?

The adaptive market hypothesis (AMH) is an alternative economic theory that combines principles of the well-known and often controversial efficient market hypothesis (EMH) with behavioral finance. It was introduced to the world in 2004 by Massachusetts Institute of Technology (MIT) professor Andrew Lo.

Key Takeaways

  • The adaptive market hypothesis (AMH) combines principles of the well-known and often controversial efficient market hypothesis (EMH) with behavioral finance.
  • Andrew Lo, the theory’s founder, believes that people are mainly rational, but sometimes can overreact during periods of heightened market volatility.
  • AMH argues that people are motivated by their own self-interests, make mistakes, and tend to adapt and learn from them.

Understanding the Adaptive Market Hypothesis (AMH)

The AMH attempts to marry the theory posited by the EMH that markets are rational and efficient with the argument made by behavioral economists that they are actually irrational and inefficient.

For years, the EMH has been the dominant theory. The strictest version of the EMH states that it is not possible to "beat the market" because companies always trade at their fair value, making it impossible to buy undervalued stocks or sell them at exaggerated prices.

Behavioral finance emerged later to challenge this notion, pointing out that investors were not always rational and stocks did not always trade at their fair value during financial bubbles, crashes, and crises. Economists in this field attempt to explain stock market anomalies through psychology-based theories.

The AMH considers both these conflicting views as a means of explaining investor and market behavior. It contends that rationality and irrationality coexist, applying the principles of evolution and behavior to financial interactions.

How the Adaptive Market Hypothesis (AMH) Works

Lo, the theory's founder, believes that people are mainly rational, but sometimes can quickly become irrational in response to heightened market volatility. This can open up buying opportunities. He postulates that investor behaviors such as loss aversion, overconfidence, and overreaction are consistent with evolutionary models of human behavior, which include actions such as competition, adaptation, and natural selection.

People, he added, often learn from their mistakes and make predictions about the future based on past experiences. Lo's theory states that humans make best guesses based on trial and error. This means that, if an investor's strategy fails, they are likely to take a different approach the next time. Alternatively, if the strategy succeeds, the investor is likely to try it again.

The AMH is based on the following basic tenets:

  • People are motivated by their own self-interests
  • They naturally make mistakes
  • They adapt and learn from these mistakes

The AMH argues that investors are mostly, but not perfectly, rational. They engage in satisficing behavior rather than maximizing behavior, and develop heuristics for market behavior based on a kind of natural selection mechanism in markets (profit and loss). This leads markets to behave mostly rationally, similar to the EMH, under conditions where those heuristics apply.

However, when major shifts or economic shocks happen, the evolutionary environment of the market changes; those heuristics that were adaptive can become maladaptive. This means that under periods of rapid change, stress, or abnormal conditions, the EMH may not hold.

Examples of the Adaptive Market Hypothesis (AMH)

Suppose there is an investor buying near the top of a bubble because they had first developed portfolio management skills during an extended bull market. While the reasons for doing this might appear compelling, it might not be the best strategy to execute in that particular environment.

During the housing bubble, people leveraged up and purchased assets, assuming that price mean reversion wasn't a possibility (simply because it hadn't occurred recently). Eventually, the cycle turned, the bubble burst and prices fell.

Adjusting expectations of future behavior based on recent past behavior is said to be a typical flaw of investors.

Criticism of Adaptive Market Hypothesis (AMH)

Academics have been skeptical about the AMH, complaining about its lack of mathematical models. The AMH effectively echoes the earlier theory of adaptive expectations in macroeconomics, which fell out of favor during the 1970s as market participants were observed to mostly form rational expectations. The AMH is essentially a step back from rational expectations theory, based on the insights gained from behavioral economics.

Lo, A., "The Adaptive Markets Hypothesis: Market Efficiency from an Evolutionary Perspective." Accessed May 31, 2021.
