FinTech before Blockchain
The financial technology [FinTech] of today revolves around three major disruptive innovations: high-frequency and systematic trading, machine learning, and blockchain. Of the three, high-frequency trading [HFT], although "new," is the most thoroughly explored. Budish, Cramton, and Shim (2015) argue that HFT provides two benefits. The first is increased liquidity from the sheer rapidity of trades: order matching and clearing are both automated by algorithms. The authors suggest that HFT likely contributed to the decline in average bid-ask spreads from 1.46 percent in 1980 to 0.11 percent in 2006. Most remarkably, they introduced the concept of discrete-time trading, in which all orders received within a 100-millisecond batch are executed together, which would in fact be fairer to every order arriving in that interval. They observe that continuous-time HFT allows for "mechanical" arbitrage opportunities within those split seconds, and that this mechanical arbitrage drains liquidity from the markets instead of providing it (Budish, Cramton, & Shim, 2015). This runs contrary to what most would believe; in the traditional sense, many hold that the greater the velocity of trading, the more liquid markets become. The authors note, however, that the arbitrage rents extracted from the speed differences among HFT players remove liquidity from the system. Firms such as Renaissance Technologies [RenTech] have benefitted from and built long-standing reputations on HFT and systematic algorithms designed to profit within existing financial frameworks. Practicing what is known as "black box" trading, the firm prides itself on knowing all risks involved before entering a trade; as such, it has an exit strategy for every trade.
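The discrete-time idea can be sketched in a few lines: orders are grouped into fixed 100-millisecond batches and treated equally within a batch, rather than processed serially by arrival time. The batch interval matches the paper's example, but the order fields and data here are illustrative assumptions, not the authors' market design in full.

```python
# Sketch of discrete-time order batching: every order landing in the
# same 100 ms interval shares one batch, so a 1 ms speed edge inside
# the interval confers no priority. Order data is made up.
from collections import defaultdict

BATCH_MS = 100  # batch interval in milliseconds

def batch_orders(orders):
    """Group (timestamp_ms, order_id) pairs into 100 ms batches."""
    batches = defaultdict(list)
    for ts_ms, order_id in orders:
        batches[ts_ms // BATCH_MS].append(order_id)
    return dict(batches)

orders = [(3, "A"), (57, "B"), (99, "C"), (101, "D"), (250, "E")]
print(batch_orders(orders))  # A, B, C share batch 0; D batch 1; E batch 2
```

In a full frequent batch auction, each batch would then clear at a single uniform price, removing the reward for shaving microseconds off arrival times.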
If greater velocity cannot provide more liquidity, and by proxy stability, to markets, what can? This brings the second application to our attention: machine learning [ML], an application of artificial intelligence [AI]. Machine learning grew out of the effort to have machines learn from data. Previously a subsection of AI, ML broke away as AI headed toward expert-knowledge replication while ML moved toward broader use of statistics in general (Langley, 2011). At its core, machine learning is fundamentally based on regression, ordinary least squares [OLS] in particular; the field attempts to build systems that can make predictions about the future from large troves of data.
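To make the "rooted in regression" point concrete, here is a minimal OLS fit in pure Python: given past (x, y) observations, fit y ≈ a + b·x and predict a future value. The data is made up for illustration.

```python
# Minimal ordinary least squares: slope = covariance(x, y) / variance(x),
# intercept from the means. Data is illustrative.
def ols_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

xs = [1, 2, 3, 4]
ys = [2.1, 3.9, 6.0, 8.1]   # roughly y = 2x
a, b = ols_fit(xs, ys)
print(a + b * 5)            # extrapolated prediction at x = 5, about 10.05
```

Everything from ridge regression to deep networks can be read as elaborations of this template: choose parameters that minimize prediction error on past data, then extrapolate.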
Clustering allows users to derive empirical results and display data graphically. Within mathematical finance it is most often applied through variance-covariance matrices; the noise in such applications is retained, and machine learning algorithms are used to build predictive models on top of them. In hierarchical clustering, the data takes the form of a tree (a dendrogram) in which each leaf holds a single object.
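A toy agglomerative clustering makes the tree structure visible: each point starts as its own leaf, and the two closest clusters are merged repeatedly until one tree remains. Real work would use a library such as scipy.cluster.hierarchy; the single-linkage rule and the data here are illustrative assumptions.

```python
# Single-linkage hierarchical clustering on 1-D points, in pure Python.
# Each point begins as its own leaf; the merge history is the dendrogram.
def single_linkage(points):
    clusters = [[p] for p in points]   # every point is its own leaf
    merges = []                        # record of the tree structure
    while len(clusters) > 1:
        # find the pair of clusters with the smallest minimum distance
        i, j = min(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda ij: min(abs(a - b)
                               for a in clusters[ij[0]]
                               for b in clusters[ij[1]]),
        )
        merges.append((clusters[i][:], clusters[j][:]))
        clusters[i] = clusters[i] + clusters.pop(j)
    return merges

merges = single_linkage([1.0, 1.1, 5.0, 5.2])
print(merges[0])  # the two closest points merge first: ([1.0], [1.1])
```

Cutting the resulting tree at different heights yields coarser or finer groupings, which is what makes the hierarchical form useful for exploratory work on noisy financial data.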
The frontier of ML resides with deep learning, also known as deep structured learning; it can be supervised or unsupervised. Deep learning draws on tremendously large, uncategorized data sets and tries to identify structure within them. Deep learning systems differ in that they contain more layers between input and output; these layers can be hidden and nonlinear, and in feedforward designs the results are not fed back into the system. Neural networks have known failure modes: too few neurons in the hidden layer will result in underfitting, whereas too many can result in overfitting, because there is not enough data to train all the existing neurons (Heaton, 2008). Although highly successful in certain fields such as speech recognition and drug design, deep learning has been remarkably absent from finance and markets in general. Its greatest flaw is that the field lacks theory to back it up.
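The role of a hidden, nonlinear layer can be shown with a classic hand-wired example: no single linear layer can compute XOR, but one hidden layer of two neurons can. The weights below are set by hand rather than trained, so this sketches structure only, not learning or how to choose layer sizes in practice.

```python
# A hand-wired feedforward network with one hidden layer computing XOR.
def step(z):                          # a simple nonlinear activation
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)          # fires if at least one input is 1
    h2 = step(x1 + x2 - 1.5)          # fires only if both inputs are 1
    return step(h1 - h2 - 0.5)        # "at least one, but not both"

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# → [0, 1, 1, 0]
```

Stacking many such nonlinear layers, with weights learned from data rather than set by hand, is all that "deep" adds; the underfitting/overfitting trade-off above is about how many of these hidden units the data can actually pin down.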
I’ll leave the rest of ML/AI to the true experts! Let’s move on.
Taking the World by Storm
A fine example was the match between Lee Sedol and Google DeepMind’s algorithm “AlphaGo,” which took place between 9 and 15 March 2016. Lee Sedol, a South Korean 9-dan professional of the strategy board game Go, lost to AlphaGo 1 to 4. Many were surprised at AlphaGo’s success, but a little-known fact is that AlphaGo needed a huge amount of computing power to match Lee: the researchers ran a distributed version drawing on many times the computing power of a retail desktop (Silver et al., 2016).
Correspondingly, Elon Musk co-founded OpenAI in 2015 in order to advance AI and share its progress widely. OpenAI developed a game-playing bot designed to compete at DOTA 2 [Defense of The Ancients 2], a game whose permutations and options are comparable to those of Go. The bot was trained largely through self-play, with its developers shaping its objectives around preferences such as (i) do not die, (ii) do not lose health, and (iii) do not lose the game. At the Dota 2 “The International” world championship, which concluded in August 2017, the bot challenged and defeated Danil “Dendi” Ishutin, one of the world’s best-known professional players, in a one-versus-one match. At the time of writing, it remained undefeated in its public one-versus-one showings.
The players selected to face the OpenAI bot observed that it continually used the same tactics even when it had been killed doing so: it learned from past lessons and optimized its performance rather than changing behavior drastically (Mashable, 2017). This is in stark contrast to what investors, and by generalization humans, would do; decisions can be changed, but parameters cannot. Humans tend to be all-or-nothing [1 or 0] learners: a human either learns [1] or fails to learn [0]. The bot’s ML system, while also bounded by 1 and 0, admits many intermediate values [1, 0.99, 0.98, …, 0.01]. It internalizes the mistakes it commits and retains each lesson in degrees.
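The graded-versus-binary contrast can be sketched with a single gradient step: instead of flipping an answer outright, a gradient learner nudges a continuous weight a little after each lesson, so past errors leave a proportionate trace. The numbers and learning rate below are illustrative assumptions, not how OpenAI’s bot was actually trained.

```python
# Graded learning: the same lesson repeated three times nudges a
# continuous weight, so confidence drifts upward in small increments
# rather than jumping from 0 to 1.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w = 0.0                       # single weight; prediction = sigmoid(w * x)
lr = 0.5                      # learning rate
for x, y in [(1.0, 1), (1.0, 1), (1.0, 1)]:   # repeated identical lesson
    p = sigmoid(w * x)
    w += lr * (y - p) * x     # gradient step on log loss: a small nudge
    print(round(p, 2))        # prints 0.5, then 0.56, then 0.62
```

Each pass moves the prediction only part of the way toward the target; that residue of every past update is the "memory" the paragraph above describes.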
In financial-market terms, the 1-or-0 approach can be likened to investing through unit trusts and exchange-traded funds [ETFs]. ETFs redistribute idiosyncratic risk until it is nearly zero; this, however, is still different from the use of black-box ML by actors who believe markets are inefficient. Without delving into specifics, ML algorithms provide a platform through which the investors and managers who run them can be allocated more certainty than other, human actors. This, more than its predictive function, is the fundamental reason for the rise of machine learning: ML provides a level of certainty backed by data. It can be erroneous, but it is better than a human plucking thoughts out of the air.
Nassim Nicholas Taleb proposes that systems and frameworks should be resilient in the face of challenges. In his book Antifragile, Taleb advocates the Hydra, the ancient Greek representation of antifragility: even when surrounded by chaos or pain, it thrives and in fact gains from the experience. Like the Hydra, when one head is chopped off, two rise in its place. It would not be a stretch to call the OpenAI bot, and learning systems optimized like it, hydras.
In addition to advocating antifragility, Taleb shares the concept of the Phoenix, which is merely robust in the face of challenges: it survives them unchanged. The next FinTech alternative attempts to address this problem and remain robust in the face of challenges and attacks. Blockchain attempts to strip the need for trust out of the system entirely.
Blockchain Technology
Blockchain can be explained as a decentralized network with a memory (Buterin, 2017). The blockchain of today is simply a combination of several technologies that already existed by 2008. At its core, blockchain has its roots in cryptography, in a scheme originally proposed by Dwork and Naor in 1992. Adam Back adapted the idea in 1997 as Hashcash, invented to protect e-mail inboxes from spammers: the sender expends a small amount of CPU power to produce a “hash stamp,” and the act of expending that power is known as a proof-of-work. A spammer attempting to spread e-mail messages to adopters of this system faces the huge challenge of needing massive amounts of CPU hardware simply to send spam; the small costs of brute-forcing each stamp stack up rapidly, and given the technology of 1997 this would have been extremely costly. This transfers the burden [the negative externality] onto the spammer, incentivizing him or her to stop spamming. Back’s Hashcash uses the 160-bit Secure Hash Algorithm 1 [SHA-1], a cryptographic hash function published by the National Security Agency [NSA] in 1995. A hash function maps any input to a fixed-length digest; verifying a stamp is as cheap as recomputing the digest and checking it against the required target, while producing a qualifying stamp requires brute-force search. Back’s implementation of proof-of-work [PoW], coupled with these hash functions, would contribute to the foundation of the major cryptocurrencies we observe today, e.g. Bitcoin’s SHA-256 (of the SHA-2 family) and Ethereum’s Keccak (a SHA-3 variant).
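The asymmetry that makes Hashcash work, expensive to mint, cheap to verify, fits in a short sketch using SHA-1, the hash Back's scheme actually used. The message string and the 12-bit difficulty here are illustrative assumptions chosen to keep the search fast; real Hashcash demanded around 20 leading zero bits.

```python
# Minimal hashcash-style proof-of-work: brute-force a nonce until the
# SHA-1 digest of "message:nonce" starts with enough zero bits.
import hashlib

DIFFICULTY_BITS = 12  # illustrative; real Hashcash used ~20 bits

def leading_zero_bits(digest: bytes) -> int:
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        return bits + (8 - byte.bit_length())
    return bits

def mint_stamp(message: str) -> int:
    """Costly for the sender: try nonces until the hash is rare enough."""
    nonce = 0
    while True:
        digest = hashlib.sha1(f"{message}:{nonce}".encode()).digest()
        if leading_zero_bits(digest) >= DIFFICULTY_BITS:
            return nonce
        nonce += 1

def verify_stamp(message: str, nonce: int) -> bool:
    """Cheap for the receiver: one hash, one comparison."""
    digest = hashlib.sha1(f"{message}:{nonce}".encode()).digest()
    return leading_zero_bits(digest) >= DIFFICULTY_BITS

nonce = mint_stamp("alice@example.com")
print(verify_stamp("alice@example.com", nonce))  # → True
```

Each extra difficulty bit doubles the sender's expected work while leaving verification at a single hash, which is exactly the externality transfer described above, and the same asymmetry Bitcoin later scaled up into mining.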