LLMs, algo trading and the promise of an algo-trading co-pilot

5 min read

I am writing this post as it may help others who are also exploring how to use LLMs for algo trading and whether they can actually be effective. A caveat up front: I'm not really a trader, and frankly I'm not really a software engineer. But I can code. And that's pretty dangerous. But fun.

What I was trying to do

The concept I had in my head wasn't the FinGPT stuff, where a simple wrapper gets the ChatGPT API to return stock ideas. I wanted to take advantage of how good LLMs have become at coding (particularly OpenAI, Gemini, Sonnet and DeepCode, all of which I tried for this project). I wanted to see whether, put in a loop, an LLM could construct a perfect algo for a trade idea. Short answer: it probably can't. Long answer: it's complicated.

There's been some work on this and a lot of papers, which I can dig up if anyone's interested. This was a fun read (and prescient): https://medium.com/@brett_17026/llms-and-algorithmic-trading-0df34383187b

There are a number of challenges on top of the usual ones: getting data, cleaning it, backtesting and so on.

I wanted to hand off as much as possible to pre-existing frameworks. I tried to develop my own backtesting framework, realised I was only going to mess that up, and found VectorBT Pro, much to my joy.

This is what it looked like:

A lot going on here, and i'll try to explain! In my boredom I asked it to style the frontend to look like Bloomberg and it obliged!


The concept was this:

1. Create a universe of assets

2. Get the data

3. Store a strategy/idea

4. Feed that to the LLM to construct a trading thesis

5. Based on that thesis, get the LLM to generate an algorithm (broadly blind to the underlying data)

6. Backtest the algo

7. Feed the results back (avoiding lookahead bias etc.)

8. Go back round the loop for as long as it took.
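The loop above can be sketched as plain Python. Every function here is a hypothetical stand-in: in the real system each would call an LLM, a data API, or the backtester.

```python
# Hypothetical stand-ins for the real pipeline pieces.
def build_thesis(idea):
    return "Thesis derived from idea: " + idea

def generate_algo(thesis, critique):
    # The LLM would return runnable code; here it's just a parameter dict.
    return {"thesis": thesis, "critique": critique,
            "entry": "rsi < 30", "exit": "rsi > 70"}

def backtest(algo):
    return {"sharpe": 0.4, "n_trades": 12}  # placeholder metrics

def critique_results(results):
    return "Sharpe %.2f too low; consider a volatility filter." % results["sharpe"]

def iterate(idea, rounds=3):
    thesis = build_thesis(idea)               # steps 3-4
    history, critique = [], None
    for _ in range(rounds):
        algo = generate_algo(thesis, critique)  # step 5
        results = backtest(algo)                # step 6
        critique = critique_results(results)    # step 7
        history.append({"algo": algo, "results": results, "critique": critique})
    return history                              # step 8: round the loop

history = iterate("mean reversion on oversold large caps")
print(len(history))  # → 3
```

The key design point is that the critique from round N is the only extra context fed into round N+1.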

The stack, for those who are interested, was this:

- Laravel (TALL) for UX/dashboard and backend

- MySQL for storage and Redis for caching

- Tons of financial APIs for different data (Alpaca, Financial Modeling Prep, AlphaVantage, Polygon)

- I used Claude directly (and AI Studio) extensively to help code the site up; the productivity gains are really remarkable

So what happened?

It worked. Kind of.

I did a couple of things that made it interesting. I fed in all 200 ta-lib indicators and made them available to the LLM. I fed in the general structure of how the algo should be formatted. There was a lot in this:

- dynamic creation of code that could work one-shot for trading without breaking the backtest

- live use of indicators, in other words the ability for it to a) create its own indicators or signals where it wanted (iffy!) and b) use available indicators/signals as it wished (it set the parameters)

- control entry/exits
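To make that concrete, here is a minimal, hypothetical version of the structured format the LLM had to emit (the real schema is an assumption on my part): pick indicators from a whitelist, set their parameters, and define entry/exit rules. Validating the spec before execution is what keeps one-shot generated code from breaking the backtest.

```python
# Hypothetical whitelist: a small subset of ta-lib's indicator names.
ALLOWED_INDICATORS = {"RSI", "SMA", "EMA", "MACD", "ATR"}

def validate_algo(spec):
    """Return a list of problems with an LLM-emitted algo spec."""
    errors = []
    for ind in spec.get("indicators", []):
        if ind.get("name") not in ALLOWED_INDICATORS:
            errors.append("unknown indicator: %s" % ind.get("name"))
        if not isinstance(ind.get("params"), dict):
            errors.append("%s: params must be a dict" % ind.get("name"))
    for key in ("entry_rule", "exit_rule"):
        if not spec.get(key):
            errors.append("missing " + key)
    return errors

spec = {
    "indicators": [{"name": "RSI", "params": {"timeperiod": 14}}],
    "entry_rule": "RSI < 30",
    "exit_rule": "RSI > 70",
}
print(validate_algo(spec))  # → []
```

A rejected spec goes straight back to the LLM with the error list, which is far cheaper than letting a malformed algo blow up mid-backtest.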

I then injected that into the Laravel queue. This allowed the LLM to run two separate processes: one controlling the generation of the algos and a second running the backtests.

So in short, there were two options, iterate and auto-iterate. I added an additional feature to allow for additional human input at the iteration stage, to help explain to the LLM the direction I occasionally wanted it to go.


Once the thesis was constructed by the LLM, it would trigger a job to create the algo. This would then trigger a script that used VBT Pro with all the relevant parameters, and we'd get the results of the backtest:


VBT Pro would also give me all the trades, so we had a log of the test:
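As a toy stand-in for what the VBT Pro step produces (VBT Pro does this vectorised with far richer stats; the signals and sizes here are invented for illustration), the sketch below applies entry/exit signals to a price series and collects a per-trade log of the kind that got fed back:

```python
import numpy as np
import pandas as pd

# Synthetic price series: a random walk standing in for real OHLCV data.
rng = np.random.default_rng(0)
close = pd.Series(100 * np.cumprod(1 + rng.normal(0, 0.01, 252)))

# Invented example signals: buy on a dip below the 20-day SMA, sell on recovery.
sma = close.rolling(20).mean()
entries = (close < sma * 0.98) & (close.shift(1) >= sma.shift(1) * 0.98)
exits = close > sma

# Walk the signals and record each round-trip trade.
trades, in_pos, entry_px, entry_i = [], False, 0.0, 0
for i in range(len(close)):
    if not in_pos and entries.iloc[i]:
        in_pos, entry_px, entry_i = True, close.iloc[i], i
    elif in_pos and exits.iloc[i]:
        trades.append({"entry_idx": entry_i, "exit_idx": i,
                       "pnl_pct": close.iloc[i] / entry_px - 1})
        in_pos = False

log = pd.DataFrame(trades)  # the kind of trade log the LLM gets to critique
print(len(log), "trades")
```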


But more importantly, at algo generation time I got the LLM to do two things as part of its CoT (chain of thought): return a critique of the previous LLM output and its algo, and offer some thinking on future algo development. This is where human guidance helped in making sure it didn't go wildly off piste, which it often did.
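The shape of that response looked roughly like this (field names are my reconstruction, not the exact schema): the algo spec travels alongside a critique of the previous attempt and a note on where to go next, which is the hook for human steering.

```python
import json

# Hypothetical CoT payload returned at algo generation time.
response = json.loads("""
{
  "algo_spec": {"entry_rule": "RSI < 25", "exit_rule": "RSI > 65"},
  "critique_of_previous": "Prior version over-traded in chop; RSI threshold too loose.",
  "future_direction": "Add an ATR filter to avoid low-volatility regimes."
}
""")
print(sorted(response.keys()))  # → ['algo_spec', 'critique_of_previous', 'future_direction']
```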


By setting it to auto-iterate, I could sit back and watch it bounce back and forth, ideating about what could work. There were various challenges in making sure lookahead bias didn't feature, and for this initial play around I decided not to feed back detailed data, to avoid overfitting where that was a risk. In short, it was trying to work from a thesis and generate something algorithmic that could then fit that thesis.
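One way to enforce that discipline (a sketch of the idea, with assumed details): the LLM only ever sees aggregate statistics computed on the training window, never raw prices or anything from the held-out window.

```python
import numpy as np
import pandas as pd

# Synthetic daily returns standing in for real backtest output.
rng = np.random.default_rng(1)
returns = pd.Series(rng.normal(0.0003, 0.01, 500))
train, holdout = returns.iloc[:400], returns.iloc[400:]

def summary_for_llm(r):
    # Only coarse aggregates go back into the loop, not the series itself.
    return {"n_days": len(r),
            "ann_return": round(float(r.mean() * 252), 4),
            "ann_vol": round(float(r.std() * 252 ** 0.5), 4)}

feedback = summary_for_llm(train)  # the holdout stays untouched until the end
print(feedback["n_days"])  # → 400
```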

Did it work?

a) Obviously not well enough, otherwise I'd be running the world's largest hedge fund. I'm not.

b) Did it achieve what I set out to do? Yes, it did.

Here are the learnings:

1. LLMs can create effective code for algo trading one-shot with the right guardrails

2. You can construct an entire framework to backtest etc. with the help of LLMs, making complex products like VBT Pro hyper-accessible

3. Letting an LLM run on its own without sufficient guidance is dangerous and ineffective; letting agents fly on their own without sufficient constraint (other projects have confirmed this too) will lead to weak results

4. Co-piloting, i.e. more human intervention, will likely get you there faster

5. Its thesis generation wasn't creative unless I pushed it to be. It will generate plausible crap (we know that), but in this environment that's really unhelpful

6. You're unlikely to get an LLM to find you alpha. Will it refine your thinking the way mainstream chatbots do? Almost certainly. But if you're expecting it to spot patterns for fun, it just won't do it

7. It's easy to get limited by legacy thinking. I started by generating one algo at a time and then auto-iterating. And then I realised that this is a nonsense limitation. I can get the LLM to create multiple (10s, 100s etc.) different variations of its thesis (note this isn't ML, so it's still trying to fit to its thesis and explaining why it's taking each step) and then choose the winners (based on a scoring system, or a further CoT). This meant that for negligible cost I could run 100s of iterations.
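Learning 7 reduces to a generate-score-select loop. A minimal sketch (the parameters and the scoring function are placeholders; in the real system the score would come from a backtest metric like Sharpe):

```python
import numpy as np

rng = np.random.default_rng(42)

def make_variation(i):
    # One randomised variation of the same thesis.
    return {"id": i,
            "rsi_entry": int(rng.integers(15, 35)),
            "rsi_exit": int(rng.integers(55, 80))}

def score(variation):
    # Placeholder for a real backtest metric (e.g. Sharpe ratio).
    return float(rng.normal(0.5, 0.3))

variations = [make_variation(i) for i in range(100)]
ranked = sorted(variations, key=score, reverse=True)
winners = ranked[:5]  # survivors go into the next CoT round
print(len(winners))  # → 5
```

Because each variation is cheap to generate and score, breadth replaces depth: many shallow candidates per round instead of one carefully nursed algo.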

8. LLMs are still slow.

9. LLMs are cheap.

10. I've not given up. This was an enormously fun adventure into things that were unimaginable just a year or two ago. It's putting an enormous amount of power back into the hands of retail traders, but it's easy to get 80% of the way there; the last 20% is far harder.

I'm not blind to the stupidity and folly of all of this. Putting the algos live required more confidence, but not much more. As I said, I'm not a seasoned trader, which set me back, and I did this mainly as an interesting concept to try. There will be a lot of criticism of so many of my approaches, but whatever; there's a lot to play with here. I am aware that ML is the key focus in trading, but this wasn't about HFT; it was about seeing whether, in a more subjective way, an LLM could laterally translate an idea into a trading algorithm and then recursively improve it.

I hope this is helpful to others playing with LLMs and algo trading, and if I can help or answer questions, I'm happy to.
