Using quantum generative adversarial networks for portfolio analysis

Portfolio Analysis

In 1952, Markowitz proposed a novel theory that exploits diversification to engineer a portfolio with higher returns for lower risk, going against the conventional belief that risk has a positive linear relationship to reward. Mean-variance portfolio optimization is the analysis technique born from Markowitz's work. What Markowitz added to the game was the consideration of the variance and covariance between individual stocks: the idea is to maximize return by combining assets that are anti-correlated, resulting in lower volatility for the portfolio as a whole.
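To make the diversification effect concrete, here is a minimal numerical sketch (all figures are made up): an equal-weight portfolio of two assets with identical expected return and volatility keeps the same return, but its volatility collapses as the correlation between the assets turns negative.

```python
import numpy as np

# Two hypothetical assets with identical expected return and volatility.
mu = np.array([0.08, 0.08])        # made-up annual expected returns
sigma = np.array([0.20, 0.20])     # made-up annual volatilities
w = np.array([0.5, 0.5])           # equal-weight portfolio

for rho in (1.0, 0.0, -0.9):       # correlation between the two assets
    cov = np.array([[sigma[0]**2,               rho * sigma[0] * sigma[1]],
                    [rho * sigma[0] * sigma[1], sigma[1]**2]])
    port_ret = w @ mu                          # portfolio expected return
    port_vol = np.sqrt(w @ cov @ w)            # portfolio volatility
    print(f"rho={rho:+.1f}: return={port_ret:.2%}, volatility={port_vol:.2%}")
```

The return stays at 8% in all three cases, while the volatility drops from 20% (perfectly correlated) to about 4.5% (strongly anti-correlated).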

GAN Method Overview

The use of classical GANs for portfolio analysis is described in [1]. The essence of the method is a model that takes in a set of data defining the prices of several stocks over a defined number of previous days, and outputs its predicted stock data for the following period of days. This generator is trained against a second network, a discriminator, whose goal is to determine whether the data (both the previous time period and the following time period) is generated or real. The generator is in turn trained so that the discriminator cannot tell the difference between real and generated data. The two networks are trained alternately on large quantities of data, resulting in a generator whose output the discriminator cannot distinguish from real data. In the following diagram, the training data M is split into the past data Mb and the future data Mf. Mb, together with a noise latent vector, is fed into the generator to produce a generated sample Mf^. The two datasets M (Mb + Mf) and M' (Mb + Mf^) are then used in the training methods.

Image taken from [1]
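In code, the split described above amounts to slicing each training sequence. A minimal sketch follows; the window lengths and the placeholder generator are our own illustrative choices, not taken from [1]:

```python
import numpy as np

n_prev, n_future = 24, 8               # hypothetical window sizes

def generator(Mb, noise):
    """Placeholder for the (quantum) generator defined later: maps a past
    window plus a noise latent vector to a predicted future window."""
    return noise                        # stand-in output of length n_future

M = np.random.randn(n_prev + n_future) # stand-in for one normalised sequence
Mb, Mf = M[:n_prev], M[n_prev:]        # past window Mb, future window Mf

noise = np.random.randn(n_future)      # noise latent vector
Mf_hat = generator(Mb, noise)          # generated future window Mf^

real_sample = np.concatenate([Mb, Mf])      # M  = Mb + Mf
fake_sample = np.concatenate([Mb, Mf_hat])  # M' = Mb + Mf^
```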

Data

The first step is to prepare the training data. Historical stock data is easily downloaded from the internet; in this case we have taken data from Yahoo Finance. The data is split into sequences of previous and future days, ready for training. It is also converted into percentage change and normalised so that the networks can handle it more easily. For our implementation, due to the time and computing-power limitations described in more detail later, we took 100 sequences of TSLA data.
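A rough sketch of this preparation step is below. The yfinance package, the date range, and the window sizes are our assumptions for illustration; the post does not specify them:

```python
import numpy as np
import yfinance as yf   # assumption: the post does not name the download tool

# Daily TSLA prices; the exact date range is not specified in the post.
raw = yf.download("TSLA", start="2015-01-01", end="2020-12-31")
prices = np.asarray(raw["Close"]).ravel()

# Convert to daily percentage change, then normalise to zero mean / unit variance.
returns = np.diff(prices) / prices[:-1]
returns = (returns - returns.mean()) / returns.std()

# Slice into overlapping sequences of n_prev past and n_future future days.
n_prev, n_future = 24, 8               # illustrative window sizes
seq_len = n_prev + n_future
sequences = np.array([returns[i:i + seq_len]
                      for i in range(len(returns) - seq_len + 1)])

sequences = sequences[:100]            # the post uses 100 sequences
Mb, Mf = sequences[:, :n_prev], sequences[:, n_prev:]
```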

Models

Ideally both the discriminator and the generator would be quantum. However, the limit on the number of qubits (32) means that compressing the data so that the two could work alongside each other would lose important features (we tested this). In our example implementation we have therefore used a classical discriminator and a quantum generator, allowing all qubits to be used as input to the generator. For the discriminator we used a convolutional neural network, as these have been shown to handle time-series data well. For the generator we used the quantum implementation below, taken from [2].

Image taken from [2]
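A compact sketch of this pairing is shown below, using PyTorch for the discriminator and PennyLane for the generator. The generic layered ansatz is a stand-in for the specific circuit of [2], and all sizes are illustrative (we show 8 qubits rather than 32 to keep the example light):

```python
import torch
import torch.nn as nn
import pennylane as qml

n_qubits, n_layers = 8, 3   # illustrative; the project used up to 32 qubits

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def quantum_generator(inputs, weights):
    # Angle-encode the conditioning values (past window + noise), one per qubit.
    for i in range(n_qubits):
        qml.RY(inputs[i], wires=i)
    # Generic layered ansatz (a stand-in for the circuit of [2]):
    # parameterised single-qubit rotations followed by a ring of entanglers.
    for layer in range(n_layers):
        for i in range(n_qubits):
            qml.RY(weights[layer, i], wires=i)
        for i in range(n_qubits):
            qml.CNOT(wires=[i, (i + 1) % n_qubits])
    # One expectation value per qubit = one predicted future-day value.
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

class Discriminator(nn.Module):
    """1-D CNN over the concatenated sequence (Mb + Mf or Mb + Mf^)."""
    def __init__(self, seq_len):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.LeakyReLU(0.2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(32 * seq_len, 1), nn.Sigmoid(),   # P(sample is real)
        )

    def forward(self, x):   # x: (batch, 1, seq_len)
        return self.net(x)
```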

Training

Due to time limitations, we were only able to run one and a half epochs of generator training (where each epoch cycles through the 100 sequences, training the generator on each), and to train the discriminator only once beforehand.

Result sample after one epoch with limited data (blue = validation data, orange = training data)
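For reference, a simplified version of the alternating training step, building on the sketches above. The conditioning of the generator on the past window is a crude stand-in for the real encoding:

```python
import torch

disc = Discriminator(seq_len=n_prev + n_future)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)

weights = torch.randn(n_layers, n_qubits, requires_grad=True)
g_opt = torch.optim.Adam([weights], lr=1e-2)

bce = torch.nn.BCELoss()
real_lbl, fake_lbl = torch.ones(1, 1), torch.zeros(1, 1)

# One epoch = one pass over the 100 (Mb, Mf) sequences.
for Mb_i, Mf_i in zip(torch.tensor(Mb, dtype=torch.float32),
                      torch.tensor(Mf, dtype=torch.float32)):
    # Crude conditioning: the last n_qubits past values plus noise
    # (n_future must equal n_qubits for the shapes to line up here).
    inputs = Mb_i[-n_qubits:] + 0.1 * torch.randn(n_qubits)
    Mf_hat = torch.stack(quantum_generator(inputs, weights)).float()

    real = torch.cat([Mb_i, Mf_i]).reshape(1, 1, -1)    # M  = Mb + Mf
    fake = torch.cat([Mb_i, Mf_hat]).reshape(1, 1, -1)  # M' = Mb + Mf^

    # Discriminator step: push real samples to 1 and generated ones to 0.
    d_loss = bce(disc(real), real_lbl) + bce(disc(fake.detach()), fake_lbl)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: fool the discriminator into labelling fakes as real.
    g_loss = bce(disc(fake), real_lbl)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```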

Devices

To run these models we used an array of simulators and devices:

  1. FLOQ — Having won access to Google's in-development quantum simulator, we found it incredibly useful in our training process. The simulator significantly sped up writing and debugging the code, so that time on the real devices was best spent. FLOQ is optimised for 32 qubits, which is ideal for our circuit, and it cut the circuit run time from on the order of 500 seconds on SV1 to on the order of 50 seconds, making our preliminary testing as quick as possible.
  2. SV1 — Amazon Braket's SV1 simulator proved useful in testing the optimisation of the circuit parameters: its parallel feature meant that multiple circuits could run at once. We used it to finalise testing of the model.
  3. Rigetti — Having won the $4000 AWS credits power-up, we could run our final model on the Rigetti machine through Amazon Braket.

Limitations

For this project we had the following limitations imposed upon us:

  1. The devices available imposed a maximum of 32 qubits.
  2. The run time of the training could not exceed the window during which the Rigetti device was available.

Using the Model for Portfolio Analysis

As the data our model produced is not complete, for the portfolio-analysis implementation using QAOA we take raw historical data, in order to compare the quantum methods against classical benchmarks. With the model described above in place, the data used here would be replaced by the generator's predictions, allowing future prices to be accounted for in the portfolio model. Furthermore, the accuracy of the predictions can be taken into account: a combination of past data and future predictions, weighted according to the risk of the predictions being incorrect, can be incorporated into solving the portfolio optimisation problem.
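One simple way to realise that weighting is sketched below; the blending rule and all numbers are our own illustration, not taken from the post:

```python
import numpy as np

def blended_returns(mu_hist, mu_pred, pred_error):
    """Blend historical and generator-predicted expected returns.

    pred_error is a per-asset measure of prediction risk (e.g. the
    generator's validation error); higher error shifts weight back
    onto the historical estimate. The weighting rule is illustrative.
    """
    confidence = 1.0 / (1.0 + pred_error)
    return (1 - confidence) * mu_hist + confidence * mu_pred

mu_hist = np.array([0.06, 0.10, 0.04])   # hypothetical historical returns
mu_pred = np.array([0.09, 0.02, 0.05])   # hypothetical generator predictions
pred_error = np.array([0.5, 2.0, 0.1])   # hypothetical prediction-risk scores
print(blended_returns(mu_hist, mu_pred, pred_error))
```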

Mathematical Description of Portfolio Analysis

The mean-variance portfolio optimization problem is an NP-hard combinatorial optimization problem, making it computationally expensive for current classical computers to solve. Mathematically, we can represent this quadratic problem for n assets as:

$$\min_{x \in \{0,1\}^n} \; q\, x^T \Sigma x \;-\; \mu^T x \qquad \text{subject to} \qquad \mathbf{1}^T x = B$$

where $x_i \in \{0, 1\}$ indicates whether asset $i$ is held, $\mu \in \mathbb{R}^n$ is the vector of expected returns, $\Sigma \in \mathbb{R}^{n \times n}$ is the covariance matrix of returns, $q > 0$ sets the risk appetite, and the equality constraint $\mathbf{1}^T x = B$ fixes the number of selected assets to the budget $B$. The constraint is enforced by adding a penalty term, giving the unconstrained objective

$$q\, x^T \Sigma x \;-\; \mu^T x \;+\; \lambda \left(\mathbf{1}^T x - B\right)^2,$$

with the penalty weight $\lambda$ chosen large enough that constraint-violating solutions are never optimal.

QAOA and VQE Implementations

The problem described in the previous section can be mapped onto a Hamiltonian whose ground state corresponds to the optimal solution of the mean-variance portfolio problem. An implementation of the Variational Quantum Eigensolver (VQE) or the Quantum Approximate Optimization Algorithm (QAOA) can then be used to find the optimal solution for a given set of equities [4]. Due to time constraints we took some predefined functions from Qiskit Aqua [5] and used them to implement the solution in PennyLane. We also backtested each approach against three random stock portfolios from the S&P 500. The results are shown below (plotting the growth of each portfolio over time), followed by a sketch of the QAOA mapping.

Trial 1
Trial 2
Trial 3
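For reference, here is a minimal PennyLane sketch of the QAOA side: the penalised objective above is written as a QUBO, mapped onto an Ising Hamiltonian via x_i = (1 − Z_i)/2, and optimised with a depth-p QAOA circuit. All numbers are made up, and this is a simplified stand-in for our actual implementation:

```python
import numpy as np
import pennylane as qml
from pennylane import numpy as pnp

n = 4                                    # number of candidate assets
rng = np.random.default_rng(0)

# Hypothetical inputs, normally estimated from historical (or generated) data.
mu = rng.uniform(0.0, 0.1, n)            # expected returns
A = rng.standard_normal((n, n)); Sigma = A @ A.T / n   # covariance
q, B, lam = 0.5, 2, 10.0                 # risk appetite, budget, penalty weight

# Penalised objective as a QUBO: minimise x^T Q x + c^T x (constant dropped).
Q = q * Sigma + lam * np.ones((n, n))
c = -mu - 2 * lam * B * np.ones(n)

# Map x_i = (1 - Z_i)/2 to Ising form: sum_i h_i Z_i + sum_{i<j} J_ij Z_i Z_j.
coeffs, ops = [], []
for i in range(n):
    coeffs.append(-c[i] / 2 - (Q[i, :].sum() + Q[:, i].sum()) / 4)
    ops.append(qml.PauliZ(i))
    for j in range(i + 1, n):
        coeffs.append((Q[i, j] + Q[j, i]) / 4)
        ops.append(qml.PauliZ(i) @ qml.PauliZ(j))
H_cost = qml.Hamiltonian(coeffs, ops)
H_mix = qml.qaoa.x_mixer(range(n))

p = 2                                    # QAOA depth
dev = qml.device("default.qubit", wires=n)

@qml.qnode(dev)
def cost(params):
    for w in range(n):                   # uniform superposition over portfolios
        qml.Hadamard(wires=w)
    for layer in range(p):
        qml.qaoa.cost_layer(params[0, layer], H_cost)
        qml.qaoa.mixer_layer(params[1, layer], H_mix)
    return qml.expval(H_cost)

params = pnp.array(rng.uniform(0, np.pi, (2, p)), requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.01)
for _ in range(50):
    params = opt.step(cost, params)
# The optimised circuit's most probable bitstring is the candidate portfolio.
```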

Tensor Networks

There are further methods that, with more time, we could have implemented to improve our model. The techniques described in [3], which use a tensor-network-based generative model, can be applied to the qGAN. Tensor networks are adaptive: the dimensions of tensor indices internal to the network grow and shrink during training, concentrating resources on the particular correlations within the data. This is extremely important in our case, since different portfolios exhibit very different correlation structures. The planned workflow was to construct a matrix product state (MPS) first and then optimise it through a DMRG algorithm. The initial cost-function evaluations were to be provided by our qGAN, constructed just to give support to the target probability distribution that the MPS model aims to capture. The MPS would have significantly decreased the time complexity of evaluating the cost function.
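While we did not get to this, the core data structure is easy to illustrate. The numpy sketch below factorises a state vector into an MPS via sequential SVDs, with bond dimensions adapting to the correlations present through truncation; it illustrates the MPS itself, not the DMRG optimisation described above:

```python
import numpy as np

def tensor_to_mps(psi, n_sites, max_bond=8, tol=1e-10):
    """Factorise a 2^n amplitude vector into an MPS via sequential SVDs.

    Bond dimensions grow/shrink adaptively: singular values below `tol`
    are truncated, concentrating resources on the strongest correlations.
    """
    cores, bond = [], 1
    rest = psi.reshape(bond, -1)
    for site in range(n_sites - 1):
        rest = rest.reshape(bond * 2, -1)
        U, S, Vh = np.linalg.svd(rest, full_matrices=False)
        keep = min(max_bond, int((S > tol).sum()))   # adaptive truncation
        cores.append(U[:, :keep].reshape(bond, 2, keep))
        rest = S[:keep, None] * Vh[:keep]
        bond = keep
    cores.append(rest.reshape(bond, 2, 1))
    return cores

# Example: an entangled 4-qubit state; print the adaptive bond dimensions.
n = 4
psi = np.zeros(2 ** n); psi[0] = psi[-1] = 1 / np.sqrt(2)   # GHZ state
mps = tensor_to_mps(psi, n)
print([core.shape for core in mps])   # GHZ needs only bond dimension 2
```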

References

[1] PAGAN: Portfolio Analysis with Generative Adversarial Networks
