Deep Learning for Stock Price Prediction Explained

As part of my work on the Quantized Classifier I built a set of examples showing how to use Deep Learning to predict future stock prices. Tensorflow is a popular Deep Learning library from Google. This article explains my approach, some terminology, and the results from a set of examples used to predict future stock prices. The code is free, so please download it and experiment.

Related Articles: Stock Price Prediction Tutorial, Analyzing Predictive Value of Features, FAQ, How to make money with Quantized Classifier

General Purpose Tensorflow CNN Classifier

The first step was building a general purpose utility that could read a wide variety of CSV input files and submit them to Tensorflow's CNN for classification. The utility needed to work across a wide range of inputs, from classifying heart disease to predicting stock prices, without any changes to the code. When used for stock price prediction the utility reads two CSV files: a training file and a test file. It builds a deep learning CNN from the training data, uses the CNN to classify rows from the test data file, and writes classification results to the console showing how successful the classification process was.

You need to install Tensorflow, TFLearn and Python 3.5.2 to run these samples.

Classes Explained

In supervised learning we look at a given row and assign a class. You can think of a class as a grouping of rows. Any classification project requires at least two classes, such as happy or sad, or alive or dead. For these examples I use class=0 to indicate a bar that failed to meet the goal and class=1 to indicate a bar that did meet the goal.

We are seeking bars that will rise in price enough to meet a profit taker goal before they would encounter a stop loss or that rise at least 1/2 of the way to the goal before reaching a maximum hold time.

To determine the class we look ahead at future bars and determine whether the price for that symbol has moved in a way that meets our goal. For these stock examples each goal has three factors.

  1. A percentage rise in price that would allow exit with a profit taker order.
  2. A point where, if the price drops by more than a given percentage, the trade exits with a stop limit.
  3. A maximum hold time where, if the price has not risen at least 1/2 of the way to the goal before the hold time expires, the bar is considered a failure.

Bars that satisfy these rules are assigned a class of 1 while bars that fail are assigned class 0. The classes are used by the learning algorithm to train the machine learning model. They are also used to verify the classification accuracy of the model when processing the test file. In a production trading system the predicted class is used to generate buy signals that could be executed either by a human or an automated trading system.
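The look-ahead labeling rules above can be sketched in a few lines of Python. This is a minimal illustration, not the code from the actual utility; the function name and parameters are hypothetical:

```python
def label_bar(closes, i, profit_pct, stop_pct, max_hold):
    """Class 1 if the profit taker is hit before the stop loss, or the price
    is at least half way to the goal when max_hold expires; else class 0."""
    entry = closes[i]
    target = entry * (1 + profit_pct / 100.0)      # profit taker exit
    stop = entry * (1 - stop_pct / 100.0)          # stop loss exit
    half_goal = entry * (1 + profit_pct / 200.0)   # half way to the goal
    last = min(i + max_hold, len(closes) - 1)
    for j in range(i + 1, last + 1):
        if closes[j] <= stop:
            return 0                               # stop loss hit first
        if closes[j] >= target:
            return 1                               # goal met in time
    return 1 if closes[last] >= half_goal else 0   # hold time expired
```

Running this over every bar of a price series produces the class column that the training and test files carry.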

In our examples we seek to maximize precision and recall of class 1 to find successful bars. The premise is that we will eventually use the predicted class for current bars to generate buy signals.

In a more sophisticated system we could have dozens or hundreds of classes, but in this instance we are only seeking a signal about whether it is a good time to buy a specific symbol using a specific goal set. It would be relatively easy to run the engine across hundreds of stocks so you always have something available to buy.

The utility that generates the training and test data files with the classes assigned requires the bar data to be downloaded first. You can use your own bar data, or the download utility will fetch it from Yahoo. These utilities are only samples, but they could be a good starting point for building a more sophisticated system.

The class computation used in these tests is intentionally simple. Feel free to take these samples and extend them with your own creative enhancements. Unique combinations of different classification logic and different feature engineering can allow a single engine like the Quantized classifier to produce millions of different trading signals, all customized to each specific user's trading preferences.

Base Probability and Lift

When evaluating any machine learning system there are several numbers we use to measure how effective the system is.

  • Base Probability – Given an input test data set, a certain portion of bars will be class 0 and a certain number will be class 1. If 33 bars out of 100 met the goal then the base probability that any bar is a member of class 1 is 33 / 100 = 0.33.
  • Precision – When the system runs across test data it attempts to classify bars. Precision for a class is the fraction of bars predicted as that class that actually belong to it. If the system predicted that 27 bars would be class 1 and only 22 of them actually were class 1, then the precision for class 1 is 22 / 27 = 0.8148. Precision can also be measured for all records in a system, but in this context we care most about the precision of class 1 because we plan to use it to generate buy signals.
  • Recall – When the system evaluates test data it attempts to find all the bars it should classify as a given class. In reality it will only find a fraction of the bars that are available. Recall is computed as the ratio of the records it classified correctly to the total number of records of that class, and it is typically computed on a class-by-class basis. A general rule is that you can increase recall at the expense of precision or increase precision at the expense of recall; better engines and improved feature engineering are used to increase both. We care most about recall for class 1 because higher recall will generate more buy signals. If there were 33 class 1 bars available and the system correctly classified 22 of them, then recall for class 1 would be 22 / 33 = 0.66.
  • Lift – Lift is the measure of how much better precision is than base probability. Lift is important because it allows the relative improvement in prediction accuracy to be compared even when base probability changes. If base probability is 0.33 and precision is 0.8148 then lift = 0.8148 – 0.33 = 0.4848. I use lift to help guide exploration of features; if I can increase lift without a significant reduction in recall then it is normally a good change.
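All four measures can be computed directly from lists of actual and predicted classes. A minimal sketch (the function name is mine, not from the utility):

```python
def evaluate(actual, predicted, positive=1):
    """Base probability, precision, recall and lift for one class."""
    n = len(actual)
    actual_pos = sum(1 for a in actual if a == positive)
    pred_pos = sum(1 for p in predicted if p == positive)
    true_pos = sum(1 for a, p in zip(actual, predicted)
                   if a == positive and p == positive)
    base = actual_pos / n
    precision = true_pos / pred_pos if pred_pos else 0.0
    recall = true_pos / actual_pos if actual_pos else 0.0
    return base, precision, recall, precision - base  # last value is lift
```

With the numbers from the bullets above (100 bars, 33 actual class 1, 27 predicted, 22 correct) this returns base 0.33, precision 0.8148, recall 0.66 and lift 0.4848.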

Understand Probabilities in context:

A common mistake is to look at a precision such as 55% out of the context of the goal. It is incorrect to look at 55% precision and call it poor odds unless you understand how much you win versus how much you lose. If the wins are bigger than the losses then you can remain profitable with a lower percentage of winning bars.

If you tell any gambler you will give them 50% odds, and winning hands earn 2 times as much as losing hands lose, they will gamble all week long.

The law of large numbers indicates that if a gambler has a 55% chance of winning and bets $100 each time, then after a large number of bets they will have won 55 times and lost 45 times on average per 100 bets. They would have won 55 * $200 = $11,000 from the winning bets and lost 45 * $100 = $4,500 from the losing bets, giving them a net profit of $6,500. As long as the magnitude of the win versus loss stays the same and the probability of winning stays at 55%, they will continue making a profit. The law of large numbers also indicates they could have a long run of losses in the short term and still average 55% wins in the long term, so they need to manage the amount they bet using something like the Kelly criterion to avoid gambler's ruin. In this example, even if the win % dropped a little below 50% while the magnitude of win versus loss remained the same, it would still be a winning system.
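The arithmetic above, plus the Kelly sizing it mentions, can be checked in a few lines. This is just a sketch of the example's numbers, not trading advice:

```python
def expected_profit(win_prob, win_amount, loss_amount, n_bets):
    """Long-run profit implied by the law of large numbers."""
    wins = win_prob * n_bets
    losses = (1.0 - win_prob) * n_bets
    return wins * win_amount - losses * loss_amount

def kelly_fraction(win_prob, win_loss_ratio):
    """Kelly criterion: fraction of bankroll to risk per bet."""
    return win_prob - (1.0 - win_prob) / win_loss_ratio

profit = expected_profit(0.55, 200.0, 100.0, 100)  # 55 * $200 - 45 * $100
f = kelly_fraction(0.55, 2.0)                      # wins pay 2x the loss
```

Here `profit` reproduces the $6,500 figure, and the Kelly fraction (0.325) shows how much smaller each bet should be than the bankroll to survive the short-term losing streaks.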

I used samples where the amount won is larger than the amount lost because I found that forcing a larger win magnitude helps isolate the signal from the noise. This helps the prediction system deliver greater lift. Greater lift normally comes at the cost of recall so we have fewer trades but I would rather run dozens of diverse strategies that earn more profit per trade than accept more losses with smaller profits per trade. We can still find enough trades but it may consume a bit more compute power to evaluate dozens or hundreds of strategies simultaneously.

Feature Engineering

Feature engineering is where some of the most important work is done in machine learning. Many data sets such as bar data yield relatively low predictive value in their native form. It is only after this data has been transformed that it produces useful classification.

In the context of our stock trading samples we used a few basic indicators which are applied across variable time horizons to produce machine learning features. They are:

  1. Slope of the price change compared to a bar in the past.
  2. Percentage above the minimum price within some number of bars in the past.
  3. Percentage below the maximum price within some number of bars in the past.
  4. Slope of the percentage above the minimum price over some number of bars in the past.
  5. Slope of the percentage below the maximum price over some number of bars in the past.

Each of these may be applied to any column such as High, Low, Open, or Close, or against derived indicators such as an SMA.
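The first three indicators can be sketched as plain Python; the last two are simply the slopes (differences over time) of features 2 and 3. The function name and the fixed lookback are illustrative only:

```python
def bar_features(prices, lookback):
    """Features 1-3 for each bar that has a full trailing window."""
    rows = []
    for i in range(lookback, len(prices)):
        window = prices[i - lookback:i + 1]
        lo, hi = min(window), max(window)
        slope = (prices[i] - prices[i - lookback]) / lookback  # feature 1
        above_min = (prices[i] - lo) / lo                      # feature 2
        below_max = (hi - prices[i]) / hi                      # feature 3
        rows.append([slope, above_min, below_max])
    return rows
```

A real feature file would compute these at several lookback horizons and for several columns, then append the class label produced by the look-ahead rules.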

The utility that produces the test and training files containing both the machine learning features and the classes is only intended as an example that you can modify to add your own creativity. I do not claim these are great features, but they were good enough to demonstrate the Quantized classifier and the Deep Learning CNN delivering some lift with reasonable recall.

Feature engineering is an area with nearly infinite potential for creative thought. By using different combinations of features, ML classifiers can produce radically different trading signals for different users. I encourage you to explore this area; there are hundreds of indicators explained across thousands of trading books, most of which can be converted into machine-learning-friendly features.

Deep Learning number of epochs explained

The Tensorflow Deep Learning CNN (Convolutional Neural Network) is a learning strategy based on how scientists think brains learn. The convolutional portion essentially adds multiple layers to the NN, which can allow it to perform better in areas where decision trees have previously dominated.

Just as biological systems learn best with repetition, the CNN needs to see the data multiple times while it builds the internal memory model it uses to support the classification process. Each repetition where the training data is re-submitted to the CNN engine is one epoch.

I have generally found minimum acceptable results at 80 repetitions, while the CNN seems to perform best with at least 800 repetitions when working with stock data. Each of these repetitions is what the Tensorflow libraries call an epoch. For these samples I chose 800 epochs. For those that didn't produce good results I increased the number of epochs to 2800.
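The idea of an epoch can be illustrated with a plain NumPy gradient-descent loop. This is not the Tensorflow code used in the samples, just a sketch of why repeated passes over the same training data improve the model:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))              # training features
true_w = np.array([1.5, -2.0, 0.5])        # weights that generated the data
y = X @ true_w + rng.normal(scale=0.1, size=200)

w = np.zeros(3)
for epoch in range(800):                   # one epoch = one full pass
    grad = X.T @ (X @ w - y) / len(y)      # gradient over all training rows
    w -= 0.1 * grad                        # small correction per epoch
# after enough epochs the learned weights approach the generating values
```

A CNN's weights are adjusted the same way, one small correction per epoch, which is why too few epochs leaves the model under-trained.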

Have fun and Experiment

Before you ask whether I have tried it with X data set, or whether it has been hooked to broker X: the engine is free and the examples are free. You are free to take them and test them with any data or configurations you desire. Have fun and please let me know what you learn. If you want help, I sell consulting services.

Sample Call



python ../data/spy-1up1-1dn-mh3-close.train.csv  ../data/spy-1up1-1dn-mh3-close.test.csv 800

Sample Output


CNN Related stock tests

Parm 0 = the script python will run. Parm 1 = training file to use when building the internal memory model. Parm 2 = test file to use when testing the classification engine. Parm 3 = number of epochs to use for this test.

CNNClassifyStock-SLV-1p5up0p3dnMh5.bat – SLV (Silver) Goal to rise by 1.5% before it drops by 0.3% with max hold of 5 days.

python ../data/slv-1p5up-0p3dn-mh10-close.train.csv  ../data/slv-1p5up-0p3dn-mh10-close.test.csv 800

CNNClassifyStock-SPY-2p2Up1p1DnMh6.bat – SPY Goal to rise to exit with profit taker at 2.2% with a 1.1% stop limit. Max hold of 6 days.

python ../data/slv-1p5up-0p3dn-mh10-close.train.csv  ../data/slv-1p5up-0p3dn-mh10-close.test.csv 800

CNNClassifyStock-SPY-6p0Up1p0DnMh45.bat SPY goal to rise to exit with profit taker at 6% gain with stop loss at 1% and max hold time of 45 days.

python ../data/spy-6up-1dn-mh10-smahigh90.train.csv  ../data/spy-6up-1dn-mh10-smahigh90.test.csv 800

CNNClassifyStock-SPY-8Up4DnMh90.bat SPY Goal to rise to exit with profit taker at 8% gain with stop loss at 4% and maximum hold time of 90 days.

python ../data/spy-8up1-4dn-mh90-close.train.csv  ../data/spy-8up1-4dn-mh90-close.test.csv 800   

CNNClassifyStock-SPY-1p0Up0p5DnMh4.bat SPY Goal to rise to exit with profit taker at 1% gain before it encounters a stop loss of 0.5%. Max hold time 4 days.

CNNClassifyStock-CAT-1p7up1p2dnMh2.bat CAT Goal to rise to exit with profit taker at 1.7% before it encounters a stop loss at 1.2% with max hold of 2 days.

CNNClassifyStock-CAT-6p0up1p0dnMh45.bat CAT Goal to rise to exit with profit taker at 6% before it encounters a stop loss at 1% with max hold of 45 days.

CNNClassifyStock-CAT-7p8up1p2dnMh5.bat CAT Goal to rise 7.8% to exit with profit taker before it encounters a stop loss at 1.2%. Max hold of 5 days.

Deep Learning Tensorflow disclaimer

Deep learning is a broad topic area and Tensorflow is a fairly large and complex product. I have used one configuration of a Tensorflow CNN for these examples. Tensorflow supports many other models, and CNNs can be configured in many ways, including different initializations and different layer configurations. There are most likely ways to configure Tensorflow to produce better results than I have shown in these samples. If you find them please let me know.


This work is only intended to provide a starting point from which you can easily branch out with your own discovery process. I do sell consulting services you can purchase if you would like to use my expertise to accelerate your own work.

I wrote these examples to test the Quantized classifier, and for some of the samples it delivers better results than Tensorflow did. This may be due to selection bias, where I used examples that performed well with the Quantized classifier. It may be possible to find a configuration of Tensorflow that would deliver superior results. I personally find that the Quantized Classifier and Quantized filter make it easier to find profitable combinations. The Quantized library also seems to provide better support for discovering which features are adding predictive value. Having the engine help guide feature selection is beneficial when there are millions of possible feature and indicator combinations available but only a small fraction of them will actually help predict future stock prices.

If you would like to build the Quantized classifier into a larger trading system then I can help by providing expertise and consulting services.

Please let me know if you would enjoy similar articles exploring the same examples using the Spark ML libraries or other popular ML libraries.

Thanks, Joe Ellsworth. contact

Quantized classifier Machine Learning in GO

A general purpose, high performance machine learning classifier. Source Code

Examples /  Tests cover:

  • ASL Sign language Gesture recognition
  • Classify breast cancer from 9 features. 96% accuracy at 100% recall on the first pass with no optimization, using 623 training rows.
  • Predict death or survival of Titanic passengers.
  • Predict Diabetes
  • Please send me data sets you would like to add to the test.

Node JS formula parser recursive descent

Available for download at:  BitBucket

Javascript based formula evaluator that does not require use of eval. I wrote this code to test some ideas about how to use function pointers to completely decouple the implementation from the class specification.

I used the function pointer style of the command pattern rather than inheritance because I originally intended to use this problem as an experiment in GO and RUST where function pointers are first class citizens.

Once I started playing with ideas in JavaScript the rest just kind of fell out. I don’t like the recursive descent mechanism in JavaScript due to the stack overhead but it would be great for languages that do a better job of tail recursion optimization like Scala.

It works, but I would modify it to remove recursion before heavier use. Other TODOs are in the Readme file at Bitbucket.

Mentor low Latency C or C++ code

The original version of this article was published on Linked In where it has remained consistently popular.  This version is updated with additional content.

I do offer consulting services contact

I received a request from several readers to mentor low latency C/C++ code as a result of my last blog post "C++ faster than C# depends on the coding idiom". I love to mentor but thought it would help a larger group if I shared some of the basic rules in a separate blog post. I use these techniques for our stock and Forex trading prediction engines, but they are equally useful in most domains. This is a huge topic area, easily large enough to fill several books, so please forgive items I missed.

DEM – Digital Elevation Model work with Scala

Digital Elevation Models are the height maps of the Earth taken by satellites. This data was produced using the Shuttle Radar Topography Mission (SRTM).

The SRTM3 data is available for most of the world, while the SRTM1 data is available for most of the USA. The SRTM1 data provides finer resolution than SRTM3.

The basic quick start and topography docs are OK but working code is always easier to use.

I have included sample Scala to download the entire worldwide set of SRTM3 files along with the SRTM1 files for the USA. There is also sample Scala code to read the .hgt files and convert them into grey scale renderings.
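For reference, an .hgt tile is just a square grid of big-endian signed 16-bit integers (1201×1201 for SRTM3, 3601×3601 for SRTM1). A Python equivalent of the read-and-render step might look like this; the repo's original is Scala, and the function names here are mine:

```python
import numpy as np

def read_hgt(path, size=1201):
    """Read an SRTM .hgt tile (big-endian int16) into a size x size array."""
    return np.fromfile(path, dtype=">i2").reshape((size, size))

def to_grayscale(elev):
    """Scale elevations to 0-255; SRTM void cells (-32768) are treated as 0."""
    e = elev.astype(float)
    e[elev == -32768] = 0.0
    lo, hi = e.min(), e.max()
    if hi == lo:
        return np.zeros(e.shape, dtype=np.uint8)
    return ((e - lo) / (hi - lo) * 255.0).astype(np.uint8)
```

The resulting uint8 array can be written out with any image library to get the grey scale rendering.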

Scala fetch stock bar data from Yahoo finance service

Scala example to retrieve stock bar data from the Yahoo finance service. It saves the resulting bar data in a local CSV file, fetching each year from the server and saving it in a file name where yy is replaced
with the year. The .day extension indicates this is a day-level bar; the Yahoo service does not offer anything smaller than day bars.

Ideas for Automating Naked Trading using ML Techniques

Walters Naked Trading strategy is centered around a concept of investor fatigue and how to detect when the current trend is exhausted and likely to reverse. I have not tested the strategy, but I think it provides a nice framework for a well bounded ML problem that could be useful for the Apply Machine Learning for Investing meetup group.

I watched this presentation during my mandatory exercise time yesterday. It seems like a well bounded but non-trivial way to get started testing various ML ideas. I added my ideas about how to think about automating it for ML below.

LuaJIT Access 20 Gig or More of Memory

Access 20 Gig or more from LuaJIT while coding in native Lua  and minimizing GC speed penalties.

I started using LuaJIT after first using F#, Python, Julia and C for stock and Forex related predictive work. I am always on the lookout for a language that gets as close to C's speed as possible without having to write low level C all the time.

Lua is a language that feels somewhat like a cross between BASIC and Ruby and has been around for a long time. Lua may be embedded or used stand-alone; it has been embedded into many games, entertainment consoles and other devices as a scripting language. LuaJIT is a newer compiler technology that takes what was already fast as an interpreted language and, in some of our tests, made it run over 20X faster, with a few tests reaching 80X faster.

LuaJIT seemed like the ideal combination since it provides a language any Ruby or Python programmer would find readable, with fast start-up times, excellent run-time speeds and good error messages.

Sourcecode to save Forex Bar data to CSV via cAlgo bot

I needed this utility to save 1 minute bar data and tick data in a form where our AI / ML prediction engine could access it. I needed the history, but even more important I needed something that could deliver new ticks to the prediction engine as fast as they became available.

I cheated with this utility and opened the file in shared mode. This allows my Lua code to read new lines as they become available, as if the data were delivered by a more traditional pipe. I tested the Lua code, which reads to the end of the file and then recognizes when new data is available simply by continuing the read process. When no data is available it returns immediately, so you can do a very low cost poll without having to re-open or re-read any part of the file. When I move to Linux I can replace the file with a named pipe and eliminate the overhead of writing to disk.
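The poll-to-end-of-file trick translates to a few lines in any language. The production code described here is Lua; this Python sketch shows the same idea:

```python
def read_new_lines(f):
    """Return complete lines appended to the file since the last call.
    Returns immediately when nothing new is available, so polling is cheap."""
    lines = []
    while True:
        pos = f.tell()
        line = f.readline()
        if not line:
            break                       # at end of file, nothing new yet
        if not line.endswith("\n"):
            f.seek(pos)                 # partial line still being written;
            break                       # pick it up whole on the next poll
        lines.append(line.rstrip("\n"))
    return lines
```

The caller keeps one file handle open and calls this in a loop; the writer (the cAlgo bot) appends to the same file from its own handle.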

Not an elegant architectural approach, but it keeps the interface between my proprietary code and the 3rd party cAlgo product as thin as possible so I can change brokers easily in the future.

I was able to get it to deliver 1 minute bar data back to Jan-2011, which is the best source I have found for that much data. It still has a flaw in that I could not figure out how to calculate actual transaction volume for the bar, but I will come back to that when it is critical.

Contact us

Sourcecode to Download Forex Tick data to CSV via cAlgo

I needed this to save tick data in a form where our AI / ML prediction engine could access it. I needed the history, but even more important I needed something that could deliver new ticks to the prediction engine as fast as they became available. I cheated with this utility and opened the file in shared mode, which allows my Lua code to read new lines as they become available, as if the file were a more traditional pipe. Not perfect, but I wanted to keep the code in cAlgo as small as possible so I can change brokers easily in the future. It only delivers about a year of back tick data, but that is still over a gig of data.

A version of this code is used in our Production Forex Trading by our Forex Prediction Engine.

Contact us

Lua jit tests faster than Julia for Stock Prediction Engine

I started testing Julia as a possible alternative because Julia advocates claimed the interpreter loop was nearly as fast as C, and it was similar in concept to Python, which I love but which was too slow for our application. I recently ran across a blog entry mentioning a new Lua JIT. I found it intriguing because Lua did quite well during our last round of tests.

Performance comparison Julia versus Lua Jit

Relative execution time with LuaJIT as the baseline (lower is better):

Operation                     Lua 5.2   LuaJIT   Julia   Julia Typed Array
Parse file into data frame      2.42      1.0     5.64        5.64
Compute SMA(14)                 2.81      1.0     6.87        0.70
Compute SMA(600)               33.32      1.0    80.00        1.30
Compute SMA_slice(14)           2.42      1.0    11.87        1.83
Compute SMA_slice(600)         33.32     1.0    15.52        5.90

Slice was not implemented in Lua, so the timing from the nested loop version is re-used. The underlying measurements were in seconds; the table shows times relative to LuaJIT.

Only 1 tested Julia operation was faster than Lua JIT

The only function where Julia outperformed LuaJIT was the SMA(14); all other items tested were slower. I think the reason it did better in this instance is that the SMA function must allocate a new array with 71K rows to store the results. In Julia you can do this as a typed array of float. In Lua this is done as an append to a list, so it is allocating memory in little pieces. In the SMA(600) LuaJIT was faster again because it is doing more compute work in a tight loop relative to the memory allocation overhead.
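The two SMA variants benchmarked above correspond roughly to the following Python stand-ins (the benchmark itself was Lua and Julia; the slice version does one pre-sized allocation via a cumulative sum):

```python
import numpy as np

def sma_loop(prices, period):
    """Nested-loop SMA: results appended one at a time."""
    out = []
    for i in range(period - 1, len(prices)):
        total = 0.0
        for j in range(i - period + 1, i + 1):
            total += prices[j]
        out.append(total / period)
    return out

def sma_slice(prices, period):
    """Slice/vector SMA: one allocation using a cumulative sum."""
    c = np.cumsum(np.concatenate(([0.0], np.asarray(prices, dtype=float))))
    return (c[period:] - c[:-period]) / period
```

The allocation pattern, one big typed array versus many small appends, is exactly the difference the benchmark attributes Julia's SMA(14) win to.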

Testing Julia for Predictive Engines, An early opinion

The programming language chosen for predictive trading systems depends on where you are in the life cycle. If you want to produce code to validate ideas, then the fewest lines of code to produce a working system tends to favor languages like F#, Python and Julia. Even for early ideas you sometimes have to deal with considerable data volumes, so you still need reasonable speed. If you are coding for live operations where speed is essential, then C/C++ is the best choice, especially when unplanned pauses from garbage collectors are unacceptable. Julia is a new language designed for speed in critical areas. It isn't perfect but overall I am impressed.

KNN and Ensemble for stock price prediction

Applying KNN to stock price prediction

© Jan-25-2014 Joseph Ellsworth All Rights Reserved

How to apply KNN (k-nearest neighbors) algorithms to predict future stock price movements, and how to combine KNN, score normalization and Ensemble techniques for trading predictions.
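For readers new to KNN, the core of the method fits in a few lines. This is a plain NumPy sketch without the score normalization or ensembling the article adds:

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Majority vote over the k nearest training rows (Euclidean distance)."""
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return np.bincount(train_y[nearest]).argmax()
```

In the trading setting each row would be a bar's feature vector and the label its class (0 or 1), so the prediction for the current bar is a vote among the most similar historical bars.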

See my newer work for the Quantized Classifier. It is much faster than KNN and can deliver comparable or better results in many situations. The code is freely available on BitBucket and I released it under the MIT license, so it is safe to use in commercial projects. I hope you will buy our consulting services to help solve your problems, but feel free to use the code anyway. There is also free code showing how to read and classify machine learning CSV files using Deep Learning from TensorFlow.

Why Language Performance Matters & Some Measurements

An optimization pass that takes 10 minutes in one language could take 15 hours in the slower language.

I work from the philosophy that I want the highest performance I can afford but this is traded off against development costs and delivery time because a great solution 6 months later  may be less valuable than a good enough solution 6 months sooner.

Bayes Analytic Engine is really Bayes+ Morphed for prediction at high volumes

I set out to build a simple Bayes classification engine, what the industry would normally consider a naive Bayes classifier. In the process of testing and enhancing it for stock trading I needed the same world view but very different engineering, and it evolved in different directions with many changes away from the naive classifier.

Stock recommendations engine with Machine Learning and AI based optimization

Introduction to Bayes analytic Stock recommendations engine

The stock recommendations engine delivers maximum stock trading profit by finding the subset of current buying opportunities which offer the highest probability of producing the desired profit. It combines hundreds of statistical measurements with proprietary trading algorithms to deliver a high degree of accuracy and trading profits. Advanced trading algorithms allow the system to adjust trading recommendations throughout the day to adapt to changing market conditions.

Bayes Analytic Retail Marketing with Machine Learning

The Bayes Analytic engine is a machine learning classification engine designed to predict which stock offers have the highest probability of success. This can be applied to price movement or customer behavior. The engine can benefit organizations that maintain long-term relationships with their customers. It can ensure more customers receive offers for products they will find valuable, which can increase the sales per contact and cause customers to pay more attention to future communications.

Why not use existing trading engines, R or other analytic tools

Why did you need to build the Bayes Analytic engine when so many analytic engines are available on the market?

The simple answer is that I had specific requirements and could not meet those requirements with the software I had available. The business answer is to provide a unique competitive advantage to the hedge funds we partner with.