Category Archives: Complete Solutions

Can the Recursive Gaussian Channel Beat the Battle Tested Bollinger Band?

Bridging 19th‑century mathematics and 21st‑century trading methods

A client sent me what looked like a simple indicator written in TradingView’s Pine Script—though I didn’t realize it was Pine at first—and asked if I could port it to EasyLanguage (or PowerLanguage for MultiCharts). If you Google “Gaussian Channel Donovan Wall TradingView,” you’ll find the original code. Pine Script isn’t exactly newcomer-friendly; it’s fine once you get the feel for it, but I’m spoiled by EasyLanguage, which—at least to my eye—reads almost like plain English. (Others may beg to differ!) Below is a brief Pine snippet; to this humble EL devotee, it’s more hieroglyphics than prose.

    _m2 := _i == 9 ? 36  : _i == 8 ? 28 : _i == 7 ? 21 : _i == 6 ? 15 : _i == 5 ? 10 : _i == 4 ? 6 : _i == 3 ? 3 : _i == 2 ? 1 : 0
    _m3 := _i == 9 ? 84 : _i == 8 ? 56 : _i == 7 ? 35 : _i == 6 ? 20 : _i == 5 ? 10 : _i == 4 ? 4 : _i == 3 ? 1 : 0
    _m4 := _i == 9 ? 126 : _i == 8 ? 70 : _i == 7 ? 35 : _i == 6 ? 15 : _i == 5 ? 5 : _i == 4 ? 1 : 0
    _m5 := _i == 9 ? 126 : _i == 8 ? 56 : _i == 7 ? 21 : _i == 6 ? 6 : _i == 5 ? 1 : 0
    _m6 := _i == 9 ? 84 : _i == 8 ? 28 : _i == 7 ? 7 : _i == 6 ? 1 : 0
    _m7 := _i == 9 ? 36 : _i == 8 ? 8 : _i == 7 ? 1 : 0
    _m8 := _i == 9 ? 9 : _i == 8 ? 1 : 0
    _m9 := _i == 9 ? 1 : 0

I could see right away that the code was doing some kind of coefficient “lookup,” so I ran it through ChatGPT to get a quick explanation. The model suggested it was building weights from Pascal’s Triangle. A bit later the client sent me the original TradingView post, which confirmed the script was using John Ehlers’s Gaussian filter to build a channel—similar in spirit to Keltner or Bollinger bands.

Once Ehlers’s name popped up, the next stop was his resource-rich site (mesasoftware.com/TechnicalArticles) for the theory behind the filter. I also searched for a ready-made EasyLanguage version but came up empty. With ChatGPT’s help I decided to roll my own; after all, knocking out support code like this is exactly what these AI tools are for.

What do Carl Friedrich Gauss, Blaise Pascal, and the markets have in common?

You’ve probably bumped into the bell curve in school—maybe in a stats class, maybe when teachers “graded on a curve.” Mathematicians call it by a few interchangeable names:

  • Normal distribution (stats class)
  • Gaussian curve (named after Carl Friedrich Gauss)
  • Binomial curve (because it pops out of Pascal’s Triangle)

No matter the label, it’s the same smooth hump that says, “most values cluster in the middle, very few at the extremes.” Gauss formalized the formula, Pascal’s Triangle supplies the ready‑made integer weights, and traders borrow both ideas to build filters that tame noisy price charts.

Big picture: Gauss gives us the shape of the curve, Pascal gives us the exact numbers to approximate it, and that combo lets us create a market indicator that reacts quickly and stays smooth.

How does this help build an indicator?

The word “channel” is in the indicator’s name, so it was highly likely we were dealing with a smoothed price plus an upper and lower band set a certain distance from that smoothed line. If you feed the Pine code into ChatGPT and ask for it in EasyLanguage, it will create an indicator using a bunch of arrays. ChatGPT isn’t as fluent in EasyLanguage as it is in Python; it didn’t understand EasyLanguage’s serialized variables, the ability to refer to a prior value of a variable with myValue[1] or myValue[2]. Chat tries to replicate this with arrays, which gets you into a bunch of trouble right off the bat. More on that a little later.

The Mechanics of Smoothing Price with Pascal’s Triangle, the Gaussian Kernel, or Binomial Coefficients.

(a + b)^2 = a^2 + 2ab + b^2 → coefficients 1  2  1

(a + b)^3 = a^3 + 3a^2b + 3ab^2 + b^3 → coefficients 1 3 3 1

(a + b)^4 = a^4 + 4a^3b + 6a^2b^2 + 4ab^3 + b^4 → coefficients 1 4 6 4 1

(a + b)^5 → coefficients 1 5 10 10 5 1

(a + b)^6 → coefficients 1 6 15 20 15 6 1

(a + b)^7 → coefficients 1 7 21 35 35 21 7 1

(a + b)^8 → coefficients 1 8 28 56 70 56 28 8 1

(a + b)^9 → coefficients 1 9 36 84 126 126 84 36 9 1
Binomial Coefficients

Stack those rows, keep going, and you build Pascal’s Triangle—each number is the sum of the two numbers just above it.

Look at the 7th row of Pascal’s Triangle:

1  6  15  20  15  6  1

Normalize those numbers (divide by their sum), and you obtain a discrete approximation of a Gaussian kernel.  Big deal, right?  You don’t need to know the math behind this; just know that each row in Pascal’s Triangle is symmetric, starting at one and ending at one.  You can use these coefficients to weight each value across a period of time.  Does that mean all this math stuff is akin to a weighted moving average?
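Here is a minimal Python sketch of that idea. The prices are made up, and the function names are mine, purely for illustration: build a Pascal row, normalize it so the weights sum to one, and apply it to a window of prices.

from math import comb

def binomial_weights(n):
    """Row n of Pascal's Triangle, normalized so the weights sum to 1."""
    row = [comb(n, k) for k in range(n + 1)]
    total = sum(row)
    return [w / total for w in row]

def binomial_smooth(prices, n):
    """Weight the most recent n+1 prices with the normalized Pascal row."""
    weights = binomial_weights(n)
    window = prices[-(n + 1):]
    return sum(w * p for w, p in zip(weights, window))

prices = [100.0, 101.5, 99.8, 102.2, 103.0, 102.6, 104.1]
print(binomial_weights(6))          # 1 6 15 20 15 6 1, each divided by 64
print(binomial_smooth(prices, 6))   # bell-weighted average of the last 7 prices

Turning a Pascal row into normalized bell-shaped weights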

Here is how a weighted moving average and the binomial / Gaussian weights stack up, idea by idea:

What it does
  • Weighted Moving Average: Averages recent prices, but gives newer bars bigger weights (e.g., 1-2-3-4).
  • Binomial / Gaussian weights: Averages recent prices using the numbers from Pascal’s Triangle (e.g., 1-4-6-4-1).
  • Why they feel similar: Both are just weighted sums of past prices.

Shape of the weights
  • Weighted Moving Average: Forms a triangle – rises steadily to the newest bar, then drops to zero beyond the window.
  • Binomial / Gaussian weights: Forms a bell – climbs to the centre, then falls off symmetrically.
  • Why they feel similar: Triangles and bells are both peaked shapes: the middle matters most, the edges least.

Normalizing step
  • Weighted Moving Average: Divide by the sum of the weights (e.g., 1+2+3+4 = 10) so they add to 1.
  • Binomial / Gaussian weights: Same idea: divide by 1+4+6+4+1 = 16 so they add to 1.
  • Why they feel similar: After normalizing, each is just a fancy way to say “take a percentage of each bar and add them up.”

Smoothing power
  • Weighted Moving Average: Good at knocking out single-bar noise, but the straight sides of the triangle let more mid-frequency wiggles through.
  • Binomial / Gaussian weights: Slightly better at suppressing both very fast and mid-speed wiggles, so the line looks cleaner.
  • Why they feel similar: Both cut random jitter while trying not to lag too far behind real turns.

Math connection
  • Weighted Moving Average: A single pass of linear weights.
  • Binomial / Gaussian weights: What you get if you apply a two-point moving average over and over again (each pass builds the next Pascal row).
  • Why they feel similar: Re-applying a simple WMA repeatedly evolves into the binomial weights – that’s the family link.
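To make the comparison concrete, here is a tiny Python check on five made-up prices; both columns boil down to a normalized weighted sum.

prices = [100.0, 101.0, 99.5, 102.0, 103.5]   # oldest ... newest

# Linear (WMA) weights 1-2-3-4-5, newest bar heaviest
wma_w = [1, 2, 3, 4, 5]
wma = sum(w * p for w, p in zip(wma_w, prices)) / sum(wma_w)

# Binomial weights 1-4-6-4-1, centre bar heaviest
bin_w = [1, 4, 6, 4, 1]
binom = sum(w * p for w, p in zip(bin_w, prices)) / sum(bin_w)

print(f"WMA: {wma:.2f}  Binomial: {binom:.2f}")

Two flavors of weighted sum over the same five bars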

Which comes first, the indicator or the function that feeds the indicator?

If you are working with code, especially with ChatGPT or any other LLM, you need a medium where you can quickly program and observe results.  The indicator analysis module gives you instant results, and that is where you should start.  However, if you look at the TradingView code of the Gaussian Channel, you will notice that the smoothing function is called twice on each bar, once for the close and once for the true range.  In other words, you are using the same code twice, and incorporating it without a function would be redundant.  In my first attempt, I created the smoothing function, named it Binomial, and the channels came out an order of magnitude below the current price, so all the price bars were scrunched at the very top of the chart.  If at first you don’t succeed, try and try and try and try again.

At first ChatGPT kept insisting on arrays because it didn’t realize EasyLanguage can reference earlier bars just by tagging a variable with [n]. EasyLanguage conveniently hides that bookkeeping, but you have to tell the model so it stops reinventing circular buffers. Once I explained that a local variable—say filt—already remembers its prior values (filt[1], filt[2], etc.), the conversation moved forward.
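For the curious, here is roughly what that circular-buffer workaround looks like in Python. EasyLanguage needs none of this, because filt[1] and filt[2] come for free; the smoother below is a toy, not the actual Gaussian filter.

from collections import deque

# What ChatGPT kept trying to build: a hand-rolled history buffer
# standing in for EasyLanguage's automatic filt[1], filt[2], ...
filt_history = deque([0.0, 0.0], maxlen=2)

def update_filter(price, alpha=0.3):
    prev = filt_history[-1]                     # the equivalent of filt[1]
    filt = alpha * price + (1 - alpha) * prev   # toy one-pole smoother
    filt_history.append(filt)                   # slide the window forward
    return filt

for p in [100.0, 101.0, 102.5, 101.8]:
    print(update_filter(p))

A hand-rolled history buffer versus EasyLanguage's built-in [1] reference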

The next hurdle was clarifying that Donovan’s script feeds raw data (Close and TrueRange) into every stage, not the output of the previous stage. ChatGPT was trying to build a true cascade—each pole using the prior pole’s result—whereas Donovan calculates each pole completely independently. After I pointed that out, the model rewrote the logic correctly and even walked me through the difference:

  1. Cascaded filter → Pole 2 uses Pole 1’s output, Pole 3 uses Pole 2’s, and so on.

  2. Independent poles → Every pole starts over with the raw Close and Range.

That explanation finally squared the circle and let me produce an EasyLanguage version that matches the original TradingView indicator.

“Cascade” = one stage feeding the next

Think of a cascade as a relay race:

  1. Stage 1 (“Pole 1”) takes the raw price, smooths it a little, and hands the baton to …

  2. Stage 2 (“Pole 2”), which smooths the output of stage 1 a bit more, then passes to …

  3. Stage 3, and so on.

After 4, 6, or 9 hand-offs, the combined shape of all those little smooths matches the full Gaussian bell.
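A rough Python sketch of the relay race: run the same one-pole smoother over the data several times, and each extra pass behaves like another pole. The prices and alpha here are invented for illustration.

def one_pole(values, alpha=0.4):
    """A single 'pole': a basic exponential smoother."""
    out, prev = [], values[0]
    for v in values:
        prev = alpha * v + (1 - alpha) * prev
        out.append(prev)
    return out

def cascade(values, poles=4, alpha=0.4):
    """Feed each stage's output into the next stage."""
    for _ in range(poles):
        values = one_pole(values, alpha)
    return values

prices = [100, 102, 101, 105, 107, 106, 110, 108]
print(cascade(prices, poles=1))   # quick and dirty
print(cascade(prices, poles=4))   # smoother, a hair more lag

One-pole smoother applied repeatedly, relay-race style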


The indicator lets you pick anywhere from two to nine poles to do the heavy lifting on the data-smoothing. And no, we’re not talking about the North and South Poles—or the kind you cast a fishing line from.

So, what is a pole?

In filter speak, a pole is one little “memory stage” inside the math that reaches back to yesterday’s value (or last bar’s value) before deciding today’s output. Stack more poles and you stack more of those memory stages:

  • 1 pole → basically a quick-and-dirty exponential average.

  • 4 poles → four mini-averages chained together; much smoother, a hair more lag.

  • 9 poles → nine stages deep; super-silky curve, but you’ll feel the delay.

Think of each pole as a coffee filter. One filter catches the big grounds, two filters catch the sludge, and by the time you’ve got nine stacked up, you’re practically drinking distilled water. Same beans in, different smoothness out.

You can dial in two extra tweaks:

  • Lag compensation – Tell the code to look one step ahead by swapping in a one-bar forecast of price for the raw price. That little nudge pulls the channel forward so it doesn’t trail the market.
  • Extra smoothing – Want the line even silkier? Flip the switch and the function just averages the most-recent two filter values. It’s a tiny moving average—jitter drops a notch, lag creeps up by only half a bar.
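Here is a rough Python sketch of both tweaks as I read them from the description above; the exact forecast the original script uses may differ, so treat this as an illustration only.

def one_bar_forecast(price, prior_price):
    """Crude one-bar-ahead estimate: extend the last change forward."""
    return price + (price - prior_price)

def extra_smooth(filt, prior_filt):
    """Average the two most recent filter values (a tiny 2-bar MA)."""
    return (filt + prior_filt) / 2.0

# Lag compensation: feed the forecast, not the raw price, into the filter.
src = one_bar_forecast(price=105.0, prior_price=103.5)   # 106.5

# Extra smoothing: blend this bar's filter value with the prior one.
smoothed = extra_smooth(filt=104.2, prior_filt=103.9)    # 104.05
print(src, smoothed)

Sketch of the lag-compensation and extra-smoothing tweaks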

For illustrative purposes, here is how Pole 6 is calculated.  I also show a mapping scheme that stores Pascal’s Triangle in arrays.  I put all this code inside a function named BinomialFilterN.

{─────────────────────────────────────────────────────────────────────
2. Hard-code every Pascal row (n = 1 … 9)
─────────────────────────────────────────────────────────────────────}
once
begin
{ n = 1 : 1 1 }
m0Map[1] = 1; m1Map[1] = 1;

{ n = 2 : 1 2 1 }
m0Map[2] = 1; m1Map[2] = 2; m2Map[2] = 1;

{ n = 3 : 1 3 3 1 }
m0Map[3] = 1; m1Map[3] = 3; m2Map[3] = 3; m3Map[3] = 1;

{ n = 4 : 1 4 6 4 1 }
m0Map[4] = 1; m1Map[4] = 4; m2Map[4] = 6; m3Map[4] = 4;
m4Map[4] = 1;

{ n = 5 : 1 5 10 10 5 1 }
m0Map[5] = 1; m1Map[5] = 5; m2Map[5] = 10; m3Map[5] = 10;
m4Map[5] = 5; m5Map[5] = 1;

{ n = 6 : 1 6 15 20 15 6 1 }
m0Map[6] = 1; m1Map[6] = 6; m2Map[6] = 15; m3Map[6] = 20;
m4Map[6] = 15; m5Map[6] = 6; m6Map[6] = 1;

{ n = 7 : 1 7 21 35 35 21 7 1 }
m0Map[7] = 1; m1Map[7] = 7; m2Map[7] = 21; m3Map[7] = 35;
m4Map[7] = 35; m5Map[7] = 21; m6Map[7] = 7; m7Map[7] = 1;

{ n = 8 : 1 8 28 56 70 56 28 8 1 }
m0Map[8] = 1; m1Map[8] = 8; m2Map[8] = 28; m3Map[8] = 56;
m4Map[8] = 70; m5Map[8] = 56; m6Map[8] = 28; m7Map[8] = 8;
m8Map[8] = 1;

{ n = 9 : 1 9 36 84 126 126 84 36 9 1 }
m0Map[9] = 1; m1Map[9] = 9; m2Map[9] = 36; m3Map[9] = 84;
m4Map[9] = 126; m5Map[9] = 126; m6Map[9] = 84; m7Map[9] = 36;
m8Map[9] = 9; m9Map[9] = 1;
end;

{─────────────────────────────────────────────────────────────────────
3. Working variables
─────────────────────────────────────────────────────────────────────}
variables:
beta_(0), { = 1 – alpha }
f1(0), f2(0), f3(0), f4(0), f5(0),
f6(0), f7(0), f8(0), f9(0),
f(0);

beta_ = 1 - alpha;

{─────────────────────────────────────────────────────────────────────
4. Initialise memory until we have enough bars
─────────────────────────────────────────────────────────────────────}
if currentBar <= poleCount then
begin
f1 = 0; f2 = 0; f3 = 0; f4 = 0; f5 = 0;
f6 = 0; f7 = 0; f8 = 0; f9 = 0;
end
else
begin
{================== 1-pole ==================}
if poleCount = 1 then
begin
f1 = m0Map[1]*power(alpha,1)*source
+ m1Map[1]*power(beta_,1)*f1[1];
f = f1;
end;

{================== 2-pole ==================}
{================== 3-pole ==================}
{================== 4-pole ==================}
{================== 5-pole ==================}
{================== 6-pole ==================}

if poleCount = 6 then
begin
f6 = m0Map[6]*power(alpha,6)*source
+ m1Map[6]*power(beta_,1)*f6[1]
- m2Map[6]*power(beta_,2)*f6[2]
+ m3Map[6]*power(beta_,3)*f6[3]
- m4Map[6]*power(beta_,4)*f6[4]
+ m5Map[6]*power(beta_,5)*f6[5]
- m6Map[6]*power(beta_,6)*f6[6];
f = f6;
end;
Code showing Pascal's Triangle and 6 pole smoothing

There is redundant code here, but I included it to make it readable for most of my EasyLanguage/PowerLanguage programmers.   The math is very simple when you break it down.  If we choose Pole #6 all we do is:

beta_ = (1 - Cosine(360 / per)) / (Power(1.414, 2 / numPoles) - 1);
alpha = -beta_ + SquareRoot(beta_ * beta_ + 2 * beta_);

  1. 1 x alpha^6 x close
  2. plus 6 x beta^1 x prior f6[1]
  3. minus 15 x beta^2 x f6[2]
  4. plus 20 x beta^3 x f6[3]
  5. minus 15 x beta^4 x f6[4]
  6. plus 6 x beta^5 x f6[5]
  7. minus 1 x beta^6 x f6[6]

EasyLanguage’s trig calls expect degrees, while most other languages want radians. That’s why the code feeds Cosine(360 / per)—the 360 converts the cycle length into degrees before taking the cosine.

I also handle the constant √2 (1.414…) with the Power routine (squaring it is simply Power(1.414, 2)), and the same routine works for roots—for example, the cube root of x is simply Power(x, 1 / 3).
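For readers who think in Python, here is a compact sketch of the same math: compute beta and alpha from the period and pole count, then run the binomial-weighted recursion on the pole’s own prior outputs with alternating signs, just as the EasyLanguage above does. The warm-up handling (treating missing history as zero) is simplified, and the prices are invented.

from math import cos, pi, sqrt, comb

def gaussian_filter(prices, period, poles):
    """Ehlers-style recursive Gaussian filter; each pole count is run independently."""
    beta = (1 - cos(2 * pi / period)) / (1.414 ** (2 / poles) - 1)
    alpha = -beta + sqrt(beta * beta + 2 * beta)
    out = []
    for i, price in enumerate(prices):
        filt = (alpha ** poles) * price
        # Add/subtract the filter's own prior outputs, weighted by Pascal's row.
        for k in range(1, poles + 1):
            prior = out[i - k] if i - k >= 0 else 0.0
            sign = 1 if k % 2 == 1 else -1
            filt += sign * comb(poles, k) * (1 - alpha) ** k * prior
        out.append(filt)
    return out

prices = [100, 101, 102, 103, 102, 104, 105, 107, 106, 108]
print(gaussian_filter(prices, period=10, poles=6))

The 6-pole recursion in Python, mirroring the EasyLanguage math above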

I placed BinomialFilterN inside a second routine called GaussianChannelFunc—a classic wrapper.

Why bother with the extra layer?

Here is what the wrapper does before and after calling BinomialFilterN:

  • Housekeeping – Converts the user-friendly period (per) into the α required by the core filter, applies the optional one-bar “look-ahead” to cancel lag, and runs the filter twice (price and TrueRange).
  • Packaging – Builds upper, centre, and lower bands from the two filtered series and returns all three numbers through one array argument.
  • Extensibility – Tomorrow you can tweak the channel logic—different volatility measure, ATR multiplier, extra smoothing—without touching the filter math. The heavy-duty code stays in BinomialFilterN; the wrapper simply preps inputs and formats outputs.
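Here is a skeletal Python sketch of that wrapper idea. The band construction (filtered price plus or minus a multiple of the filtered true range), the mult multiplier, and the first-bar true-range fallback are my reading of the description, not a line-for-line port; any smoother, such as the gaussian_filter sketch above, can be passed in.

def true_range(high, low, prior_close):
    return max(high - low, abs(high - prior_close), abs(low - prior_close))

def gaussian_channel(closes, highs, lows, smooth, mult=1.5):
    """Wrapper sketch: run the same smoother twice, then package three bands.

    smooth is any function that takes a list of values and returns a
    filtered list of the same length, e.g. the gaussian_filter sketch above.
    """
    trs = [highs[0] - lows[0]] + [
        true_range(h, l, pc)
        for h, l, pc in zip(highs[1:], lows[1:], closes[:-1])
    ]
    centre = smooth(closes)    # filtered price
    band = smooth(trs)         # filtered true range
    upper = [c + mult * b for c, b in zip(centre, band)]
    lower = [c - mult * b for c, b in zip(centre, band)]
    return upper, centre, lower

# Shape of the call, using an identity "smoother" as a placeholder:
closes = [101.0, 102.5, 101.8, 103.2]
highs  = [101.5, 103.0, 102.4, 103.9]
lows   = [100.4, 101.9, 101.1, 102.6]
up, mid, low = gaussian_channel(closes, highs, lows, smooth=lambda xs: xs)
print(up, mid, low)

Wrapper sketch: filter twice, package three bands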

Think of it as a coffee machine:

  • BinomialFilterN is the brewing unit—hot water + grounds in, espresso out, and it never changes.
  • GaussianChannelFunc is the barista: grinds the beans, measures the water, adds milk and foam, then hands you the finished latte. If you want vanilla syrup tomorrow, you ask the barista; you don’t redesign the boiler.

By splitting the work this way, each piece stays focused, easier to test, and simple to extend later.

The wrapper has to hand back three numbers—upper band, centre line, and lower band—yet an EasyLanguage function can formally return only one. The standard workaround is to pass the additional outputs by reference:

// upper, mid, and lower are caught by the receiving function as type numericRef
// this can get unwieldy quickly
value1 = GaussianChannelFunc(src, periods, numOfPoles, compLag, smooth, upper, mid, lower);
Code Snippet - Calling the function with three containers for the levels

That works, but the call quickly turns into a mile-long argument list.
Instead, I bundle those three outputs into a tiny array and pass the array’s address once:


array:GaussianChanArray[3](0); // remember we can use [0]

value1 = GaussianChannelFunc(src, periods, numOfPoles, compLag, smooth,GaussianChanArray);

upperChannel = GaussianChanArray[0];
centreLine = GaussianChanArray[1];
lowerChannel = GaussianChanArray[2];
Using a simple array as container for return values

This wasn’t that impressive, but what if your function needed to return five values?
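A Python analogue of the same trick is to pass one mutable container and let the function fill it; the calculations below are placeholders, not the real channel math.

def gaussian_channel_levels(prices, out):
    """Fill a caller-supplied list instead of returning three separate values."""
    centre = sum(prices) / len(prices)   # placeholder for the filtered price
    band = max(prices) - min(prices)     # placeholder for the filtered range
    out[0] = centre + band               # upper channel
    out[1] = centre                      # centre line
    out[2] = centre - band               # lower channel

levels = [0.0, 0.0, 0.0]                 # the "container", like GaussianChanArray
gaussian_channel_levels([101.0, 103.5, 102.0, 104.2], levels)
upper, centre, lower = levels
print(upper, centre, lower)

One container in, three values out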

Now onto the indicator and the strategy

From the outside this looks like a quick coding job—but getting here was a series of detours. I let ChatGPT drive and only nudged when it went off-track. Here are the dead-ends we hit before the indicator finally behaved:

  • Pine-script blind spot
    • ChatGPT didn’t recognise TradingView syntax, so its first translation attempts were gibberish.
  • “Mystery math” instead of binomial weights
    • After I mentioned Ehlers and Gaussian smoothing, the model invented a dynamic weighting scheme rather than using the fixed Pascal-triangle numbers the original script relies on.
  • Arrays everywhere
    • It kept insisting on circular buffers because it didn’t realise EasyLanguage variables already remember their own history via [1], [2], etc.
  • Wrong memory reference
    • Even after the array issue was fixed, the code updated each pole with raw price / range instead of the pole’s own prior output.
  • Unwanted filter cascade
    • ChatGPT then tried a true “cascade” (pole 2 fed by pole 1, pole 3 by pole 2). Donovan’s version calculates every pole independently—so we had to unwind that and start over.
  • Sign-flip confusion
    • It forgot the plus/minus pattern that keeps the Gaussian zero-lagged, producing a line that trailed price by several bars.

Each course-correction tightened the spec until the model finally spit out the straight, hard-coded-coefficients version you see now.

After all that was it worth the time and analysis?

  • A stop version, where you buy on a stop at the upper level and sell short on a stop at the lower level, worked best.  Liquidating at the mid-level on a stop was also incorporated.
  • Using a large profit objective and a relatively small stop loss seemed to work best.
  • An intermediate period length combined with 8 poles produced the best results.

ELD for TradeStation and Multicharts

GAUSSIANSTUDY

Text files of functions, indicator and strategies

GaussianChannelFunc Function

Head to Head with Bollinger Bands

Test results across 22 commodities for the past 25 years.

Gaussian Channel:  Optimizing the period and ATR multiplier with 8 poles:

Simple Bollinger Band: optimizing moving average length and number of standard deviations

Conclusion (fight-card style)

Decision on the first bout:
The Rolling heavy-hitter—Bollinger Bands—lands the cleaner power shots and takes the scorecards in our 22-commodity test.

But don’t call it a knockout just yet.
The Recursive counter-puncher—the Gaussian Channel—fights with an extra weapon: pole count. Adjusting those poles changes how tightly the centre line hugs price, and we’ve only sparred with one setting.

Next round:
Tune the poles, test different time-frames, and pit the fighters on equities and FX. The smarter, jabbing Gaussian might steal the rematch once its footwork is dialed in.

 

EasyLanguage Version Control and Back Up

I Can’t Believe I Just Lost All My Studies!

“How can I not restore it? I back up my files every week!” Have you found yourself in this same predicament before?  Somehow, I’ve lost my code more times than I care to admit. The TradeStation and MultiCharts paradigm of requiring us to store our precious strategies and indicators in a proprietary, non–text format has its advantages, but to me the drawbacks far outweigh any benefits.

  • Pros of a proprietary library: Seamless integration, single-click compile/run, built-in (if limited) version history, encryption, and straightforward workspace management.

  • Cons: Opaque blobs that aren’t easily diffed, harder to back up in granular increments, potential single point of failure, and extra steps when migrating to other tools.

Git is overkill for a single developer of an EasyLanguage Study.

Most programmers who work in a domain‐specific language like EasyLanguage simply don’t “get” Git. If you’re unfamiliar with Git, here’s a quick definition:

Git is the version‐control system created by Linus Torvalds—yes, the same Linus Torvalds who gave us Linux and turned down a huge payday to release it as open source. Git lets multiple developers track changes, revert to earlier versions, and collaborate seamlessly on code without stepping on each other’s toes.

Git records changes to a project by taking snapshots (commits) of its files and storing them in a distributed repository, so developers can branch and merge independently before synchronizing updates. It’s often hard to grasp because the concepts of branching, merging, and distributed workflows differ from linear, centralized versioning models and require a shift in thinking and terminology.

Fun fact: ChatGPT did a back-of-the-envelope calculation suggesting that, had Linus charged for Linux, his net worth could be as high as $50 billion. In reality, he’s a salaried employee at the Linux Foundation with a net worth closer to $10 million—proof that the open-source model can be wildly generous for everyone except the original author.

What is an EasyLanguage programmer to do?

One straightforward (but labor-intensive) method is to copy your EasyLanguage or PowerLanguage code into a plain-text editor like Notepad and save it in a well-named folder—either locally or in the cloud. That gives you a basic text-based backup. If you want to track versions, you can simply take a snapshot every time you make a change and append a version number to the filename (e.g., MyStrategy_v1.0.txt, MyStrategy_v1.1.txt, etc.). For most solo EasyLanguage developers, this ad-hoc versioning is sufficient, since you’re typically the only person editing the code. However, in the unlikely event that you do collaborate with others, it becomes cumbersome to merge updates or see exactly what changed between versions. In that scenario, learning Git would be worthwhile.

Because EasyLanguage developers typically work solo (to protect their proprietary code), most don’t bother with Git. And let’s face it—many of us get lazy about backups and versioning. You create a strategy that works, start tweaking it, and before you know it you can’t recall how to revert to the original. Who else has been down that road?

A few years ago, I developed a simple macro with AutoIt: when I hit <CTRL> F9, all the code in my current editor window is selected, copied, and saved to a new text file.

Recently I modified the macro to add a version suffix to the filename if the filename already exists.  If you hit save and the latest version is _v002.txt, then the macro will save the new copy as _v003.txt.

Back up – Checked!  Version control – Checked!  Anything else?

I do use Git for my multi‐file projects—it’s fantastic for instantly showing me what changed between commits when something breaks. I wish I had that same “see the diff” workflow for my EasyLanguage scripts. Thanks to WinMerge, I actually can: just select two versions of my script, and it highlights every added, removed, or modified line. WinMerge is free to use (they do ask for a small donation if you find it valuable), and now I can conveniently compare any two snapshots of my code—just like I would with Git.

The differences will be highlighted in the document maps on the left side and then also directly in the code.

Take a look at this video to see my workflow.

What good are these tools if you don’t use them?

I tried to make the task of backing up and version control as simple as hitting <ctrl> F9.  Now it is up to you to do it.  I promise the more you do it, the less of a hassle it will become, and I can almost guarantee you will thank me in the future.  Trust me, this is as easy as Git.  However, setting up Git is not a cakewalk.

Here all you need to do is download the two pieces of software and follow the instructions to get the following script compiled into an .exe.  Trust me, it is much easier than it looks.  I am providing this information so that I don’t have to provide an .EXE and all of the headaches involved with downloading one.  However, if you are cool with downloading an .EXE, then shoot me an email and I will provide a link.

In a few days I will publish some results from my “Snap-Back” strategy.

This is the AutoIt script you will need to copy after you download AutoIt.  Don’t worry, you don’t need to understand it.  After the code listing, I give step-by-step instructions on how to turn the script into an executable.

#include <Clipboard.au3>
#include <File.au3>
#include <MsgBoxConstants.au3>
#include <StringConstants.au3>

; Set hotkeys:
HotKeySet("^{F9}", "CaptureActiveWindow") ; Ctrl+F9 → capture text
HotKeySet("^{F12}", "TerminateScript") ; Ctrl+F12 → exit script

; Keep the script running
While 1
Sleep(100)
WEnd

Func CaptureActiveWindow()
; 1) Get full active window title
Local $activeTitle = WinGetTitle("[ACTIVE]")

; 2) Extract just the tab name (text after the last " - ")
Local $tabName = StringRegExpReplace($activeTitle, '^.* - (.+)$', '\1')
If $tabName = $activeTitle Then
; no dash found, use full title
$tabName = $activeTitle
EndIf

; 3) Remove illegal filename characters
Local $cleanTitle = StringRegExpReplace($tabName, '[\\\/:\*\?"<>\|]', "")
If $cleanTitle = "" Then $cleanTitle = "CapturedText"

; 4) Build default filename
Local $defaultFileName = $cleanTitle & ".txt"

; 5) Activate window and copy its contents
WinActivate($activeTitle)
Sleep(200)
Send("^a") ; Select all
Sleep(200)
Send("^c") ; Copy
Sleep(200)

; 6) Retrieve clipboard text
Local $text = _ClipBoard_GetData()
If @error Then
MsgBox($MB_ICONERROR, "Error", "Failed to get clipboard data.")
Return
EndIf

; 7) Ask user where to save, defaulting to our cleaned tab name
Local $savePath = FileSaveDialog( _
"Save Captured Text", _
@ScriptDir, _
"Text Files (*.txt)", _
2, _
$defaultFileName _
)
If $savePath = "" Then
MsgBox($MB_ICONINFORMATION, "Cancelled", "Save operation cancelled.")
Return
EndIf

; 8) Simple version control: if the file exists, append _v001, _v002, ...
Local $base = StringTrimRight($savePath, 4) ; remove .txt
Local $ext = ".txt"
Local $i = 1
Local $dest = $savePath
While FileExists($dest)
$dest = $base & "_v" & StringFormat("%03d", $i) & $ext
$i += 1
WEnd

; 9) Write the text to the versioned filename
Local $fileHandle = FileOpen($dest, 2) ; write mode
If $fileHandle = -1 Then
MsgBox($MB_ICONERROR, "Error", "Failed to open file for writing.")
Return
EndIf
FileWrite($fileHandle, $text)
FileClose($fileHandle)

MsgBox($MB_ICONINFORMATION, "Success", "Text saved successfully to:" & @CRLF & $dest)
EndFunc

Func TerminateScript()
Exit 0
EndFunc
Auto It Script - Set it and Forget it

Step 1: Download and Install AutoIt (plus SciTE-Lite)

  1. Go to the AutoIt website: https://www.autoitscript.com/site/autoit/downloads/
  2. Under “AutoIt Full Installation,” click Download.
  3. Run the downloaded installer (AutoIt3.exe) and follow the prompts:
  • Accept the license agreement.
  • Leave all default components checked (this installs both AutoIt and SciTE-Lite).
  • Finish the installation.

After this, you’ll have:

  • AutoIt (the compiler/interpreter) in your Program Files.
  • SciTE-Lite (a lightweight code editor preconfigured for AutoIt) installed, usually at
 C:\Program Files (x86)\AutoIt3\SciTE\

Step 2: Open SciTE-Lite and Create a New Script

  1. Launch SciTE-Lite:
  • Windows Start Menu → All Programs → AutoIt v3 → SciTE-Lite (AutoIt)
  • Or double-click the SciTE-Lite shortcut if one was placed on your desktop.
  2. In SciTE-Lite, go to File → New (or press Ctrl+N). You’ll see a blank editor window.

Step 3: Copy Your Script Code from the Website

  1. Select all of the code (click inside the code block above, then press Ctrl+A) and copy (Ctrl+C).
  2. Return to the blank SciTE-Lite window and paste (Ctrl+V) the code into it.

Step 4: Save the Script as EzLangToText.au3

  1. In SciTE-Lite, choose File → Save As… (or press Ctrl+Shift+S).
  2. In the “Save As” dialog:
  • Navigate to Documents\AutoIt Scripts\ if it exists or stay in the default folder.
  • For “File name,” type:
  • EzLangToText.au3
  • Ensure “Save as type” is set to AutoIt v3 Source (*.au3).
  • Click Save.

Now SciTE-Lite knows this is an AutoIt script.


Step 5: Compile the Script to an .exe

  1. Make sure EzLangToText.au3 is the active tab in SciTE-Lite.

  2. Press F7 (or go to Tools → Compile).

  • SciTE-Lite runs AutoIt’s compiler (Aut2Exe) behind the scenes.
  • In the output pane at the bottom, you’ll see messages like “Compiling…” and finally “Compiled successfully.”
  • When the compile finishes, you’ll find EzLangToText.exe in the same folder as your .au3 file.

Step 6: Run the Resulting EXE

  • You can now double-click EzLangToText.exe to run it on any Windows PC (no AutoIt installation needed)


Some EasyLanguage Functions Are Really “Classy”

The Series Function is very special

When a function accesses a previously stored value of one of its own variables, it essentially becomes a “series” function. Few programming languages offer this convenience out of the box. The ability to automatically remember a function’s internal state from one bar (or time step) to the next is known as state retention. In languages like Python, achieving this typically requires using a class structure. In Module #2 of my Easing into EasyLanguage Academy, I explained why EasyLanguage isn’t as simple as its name implies—yet this feature shows how its developers aimed to simplify otherwise complex tasks.

If you’ve ever worked with functions with memory that exchange data using a numericRef input, you’ve essentially been using a pseudo-class—a form of object-oriented programming in its own right. EasyLanguage includes a robust object-oriented library (with excellent resources by Sunny Harris and Sam Tennis – buy their book on Amazon), yet you’re confined to its built-in functionality since it doesn’t yet support user-defined class structures. Nonetheless, the series functionality combined with data passing brings you remarkably close to a true object-oriented approach—and the best part is, you might not even realize it.

Example of a Series Function

I was recently working on the Trend Strength indicator/function and have been mulling this post over for some time, so I thought this would be a good time to write about it.  The following indicator-to-function conversion will create a function of type series (a function with a memory).  The name series is very appropriate in that this type of function runs along with the time series of the chart.  It must do this so it can reference prior bar values.

You can ensure a function is treated as a series function in two ways:

  1. Using a Prior Value:
    When you reference a previous value within the function, EasyLanguage automatically recognizes the need to remember past data and treats the function as a series function.

  2. Setting the Series Property:
    Alternatively, you can explicitly set the function’s property to “series” via a dialog. This instructs EasyLanguage to handle the function as a series function, ensuring that state is maintained across bars.

    Manually Set the Function to type Series

Importantly, regardless of the function’s name or even if it’s called within control structures (like an if construct), a series function is evaluated on every bar. This guarantees that the historical data is consistently updated and maintained, which is essential for accurate time series analysis.

Converting an indicator to a function

You can find a wide variety of EasyLanguage indicators online, though many are available solely as indicators. This is fine if you’re only interested in plotting the values. However, if you want to incorporate an indicator into a trading strategy, you’ll need to convert it into a function. For calculation-intensive indicators, it’s best to follow a standard prototype: use inputs for interfacing, perform calculations via function calls, and apply the appropriate plotting mechanisms. By adhering to this development protocol, your indicator functions can be reused across different analysis studies, enhancing encapsulation and modularity. Fortunately, converting an indicator’s calculations into a function is a relatively straightforward process. Here is an indicator that I found somewhere.

Inputs: 
ShortLength(13), // Shorter EMA length
LongLength(25), // Longer EMA length
SignalSmoothing(7); // Smoothing for signal line
Vars:
DoubleSmoothedPC(0),
DoubleSmoothedAbsPC(0),
TSIValue(0),
SignalLine(0);
// Price Change (PC)
Value1 = Close - Close[1];
// First Smoothing (EMA of PC and |PC|)
Value2 = XAverage(Value1, LongLength);
Value3 = XAverage(AbsValue(Value1), LongLength);
// Second Smoothing (EMA of First Smoothed Values)
DoubleSmoothedPC = XAverage(Value2, ShortLength);
DoubleSmoothedAbsPC = XAverage(Value3, ShortLength);
// Compute TSI
If DoubleSmoothedAbsPC <> 0 Then
TSIValue = 100 * (DoubleSmoothedPC / DoubleSmoothedAbsPC);

// Compute Signal Line
SignalLine = XAverage(TSIValue, SignalSmoothing);
// Plot the TSI and Signal Line
Plot1(TSIValue, "TSI");
Plot2(SignalLine, "Signal");
TSI via Web
Now we can functionize it.

Below is an example of how you might convert an indicator into a function called TrendStrengthIndex. Notice that the first change is to replace any hard-coded numbers in the indicator’s inputs with parameters declared as numericSimple (or numericSeries where appropriate). This allows the function to accept dynamic values when called.  Not to give anything away, but you can also declare inputs as numericRef, numericSeries, numericArrayRef, string, stringRef, and stringArrayRef.  Let’s not worry about these types right now.

inputs: 
ShortLength(numericSimple), // Shorter EMA length
LongLength(numericSimple), // Longer EMA length
SignalSmoothing(numericSimple); // Smoothing for signal line
Inputs must be changed to function nomenclature

Below is an example function conversion for the TrendStrengthIndex indicator.  The plot statements have been commented out since—rather than plotting—the function now passes back the calculated value to the calling program.

Vars:
DoubleSmoothedPC(0),
DoubleSmoothedAbsPC(0),
TSIValue(0),
SignalLine(0);
// Price Change (PC)
Value1 = Close - Close[1];
// First Smoothing (EMA of PC and |PC|)
Value2 = XAverage(Value1, LongLength);
Value3 = XAverage(AbsValue(Value1), LongLength);
// Second Smoothing (EMA of First Smoothed Values)
DoubleSmoothedPC = XAverage(Value2, ShortLength);
DoubleSmoothedAbsPC = XAverage(Value3, ShortLength);
// Compute TSI
If DoubleSmoothedAbsPC <> 0 Then
TSIValue = 100 * (DoubleSmoothedPC / DoubleSmoothedAbsPC);

// Compute Signal Line
SignalLine = XAverage(TSIValue, SignalSmoothing);
// Plot the TSI and Signal Line
//Plot1(TSIValue, "TSI"); commented out
//Plot2(SignalLine, "Signal"); commented out
TrendStrengthIndex = TSIValue;
Functionalize It!

This works great if we just want the TrendStrengthIndex, but this indicator, like many others, has a signal line.  The signal line for such indicators is usually a smoothed version of the main calculation.  Now you could do this smoothing outside the function, but wouldn’t it be easier if we did everything inside the function?

Oh no!  I need to pass more than one value back!

If we just wanted to pass back TSIValue, all we would need to do is assign that value to the function’s name.

Passing values by reference

We can adjust the function to return multiple values by defining some of the inputs as numericRef. Essentially, when you pass a variable as a numericRef, you’re actually handing over its memory address—okay, let’s get nerdy for a moment! This means that when the function updates the value at that address, the calling routine immediately sees the change, giving the variable a kind of quasi-global behavior. Without numericRef, any modifications made inside the function stay local and never propagate back to the caller.  Not only is the function communicating with the calling strategy or indicator, it is also remembering its own values for future use.  Take a look at this code.

inputs: 
ShortLength(numericSimple), // Shorter EMA length
LongLength(numericSimple), // Longer EMA length
SignalSmoothing(numericSimple), // Smoothing for signal line

TrendStrength.index(numericRef), // Output
TrendStrength.signal(numericRef); //OutPut
Vars:
DoubleSmoothedPC(0),
DoubleSmoothedAbsPC(0),
SignalLine(0);

// Force series FUNCTION BEHAVIOR
Value4 = Value3[1];
// Price Change (PC)
Value1 = Close - Close[1];
// First Smoothing (EMA of PC and |PC|)
Value2 = XAverage(Value1, LongLength);
Value3 = XAverage(AbsValue(Value1), LongLength);

// Second Smoothing (EMA of First Smoothed Values)
DoubleSmoothedPC = XAverage(Value2, ShortLength);
DoubleSmoothedAbsPC = XAverage(Value3, ShortLength);
// Compute TSI
If DoubleSmoothedAbsPC <> 0 Then
TrendStrength.index = 100 * (DoubleSmoothedPC / DoubleSmoothedAbsPC);

// Compute Signal Line
TrendStrength.signal = XAverage(TrendStrength.index, SignalSmoothing);

TrendStrengthIndex = 1;
Is this a function or is it a class?

There is a lot going on here.  Since we are storing our calculations in the two numericRef inputs, TrendStrength.index and TrendStrength.signal, the function name can simply be assigned the number 1.  You only need to do this because the function must be assigned something, or you will get a syntax error.  Since we are talking objects, I think it would be appropriate to introduce “dot notation.”  When programming with objects, you access the class members and methods by using a dot.  If you have an exponential moving average class in Python, you would access the variables and functions (methods) in the class like this.

class ExponentialMovingAverage:
    # Class-level defaults serve as initial values.
    alpha = 0.2   # Default smoothing factor
    ema = None    # EMA starts as None

    def update(self, price):
        """
        Update the EMA with a new price.

        Parameters:
        price (float): The new price to incorporate.

        Returns:
        float: The updated EMA value.
        """
        # If ema is None, this is the first update
        if self.ema is None:
            self.ema = price
        else:
            self.ema = self.alpha * price + (1 - self.alpha) * self.ema
        return self.ema

# Create an instance of the class.
ema_calculator = ExponentialMovingAverage()

# Dot notation to access the class attribute.
print("Alpha value:", ema_calculator.alpha)

# Dot notation to access the EMA attribute before any updates.
print("Initial EMA (should be None):", ema_calculator.ema)

# Dot notation to call the update method.
ema_value = ema_calculator.update(10)
print("EMA after update with 10:", ema_value)
Using dot notation to extract values from a class

Since you are using EasyLanguage and a series function, you don’t have to deal with something like this.  On the surface this looks a little gross, but coming from a programming background, it is quite elegant.  I only show it to demonstrate dot notation.  In an attempt to mimic dot notation in the EasyLanguage function, I simply add a period “.” to the input variable names that will return the numbers we need to plot.  Take a look at the nomenclature I am using.

    TrendStrength.index(numericRef), // Output
TrendStrength.signal(numericRef); //OutPut
Function name + “.” + variable name

I am using the function name, a “.”, and an appropriate variable name.  This is not necessary.  Historically, input names that were modified within a function were preceded by the letter “O” (in this example, Oindex and Osignal), which stood for “output.”  Remember, these naming conventions are all up to you.  Here is the new indicator using our EasyLanguage “classy” function and our pseudo dot-notation nomenclature.

//Utilize the TrendStrengthIndex classy function

inputs: shortLength1(9), longLength1(19), signalSmoothing1(9);
inputs: shortLength2(19), longLength2(39), signalSmoothing2(13);

vars: trendStrength1.index(0), trendStrength1.signal(0);
vars: trendStrength2.index(0), trendStrength2.signal(0);
value1 = TrendStrengthIndex(shortLength1,longLength1,signalSmoothing1,trendStrength1.index,trendStrength1.signal);
value2 = TrendStrengthIndex(shortLength2,longLength2,signalSmoothing2,trendStrength2.index,trendStrength2.signal);


plot1(trendStrength1.index,"TS-Index-1");
plot2(trendStrength1.signal,"TS-Signal-1");

plot3(trendStrength2.index,"TS-Index-2");
plot4(trendStrength2.signal,"TS-Signal-2");
Take a look at how we access the information we need from the function calls.

You might be surprised to learn that you may have been doing object-oriented programming all along without realizing it. Do you prefer the clarity of dot notation for accessing function output, or would you rather stick with a notation that uses a big “O” combined with the input name to represent functions with multiple outputs? Also, notice how each function call behaves like a new instance—the internal values remain discrete, meaning that each call remembers its own state independently.  In other words, each function call remembers its own stuff.

Two distinct function values from the same function – called twice on the same bar.

Use Python and ChatGPT to fill the gaps of TradeStation

Merge Equity Curves 2.0

Exporting Strategy Performance XML files that use 15 years of 5-minute bars can be cumbersome.  Using Maestro can be cumbersome too.   I know there are great programs out there that will quickly analyze the XML files and do what we are about to do plus a bunch more.  But if you want to do it yourself, try working with 224,377 KB files.

If you want to do a quick merge of equity curves from different markets or systems, use this simple method, provided you have Python, Matplotlib, and Pandas installed on your computer.   Installing them is really simple, and everybody is doing it.   I created this script with the help of a paid version of ChatGPT.

Using Python et al. to plot combined equity curve.

Steps:

1.)  Create a folder on your C: drive:  C:\Data

2.)  Put this code into your strategy code.  Make sure you change the output file name for each market/system combo that the strategy is applied to.

if t = sessionEndtime(0,1) then
begin
print(file("C:\Data\Sysname-NQ.csv"),d+19000000:8:0,",",netProfit + Openpositionprofit);
end;
Code snippet to output daily equity

This will create a csv file that looks like this.

20100201,1960.00
20100202,2790.00
20100203,2330.00
20100204,-2290.00
20100205,-3740.00
20100208,-3970.00
20100209,-2360.00
20100210,-3020.00
20100211,-630.00

3.)  Copy and paste the following code into your favorite Python IDE or IDLE.

import tkinter as tk
from tkinter import filedialog
import pandas as pd
import matplotlib.pyplot as plt
import os

def extract_market_name(filename):
    """Extracts the market name from the filename formatted as 'system-market.csv'."""
    return os.path.splitext(filename)[0].split('-')[-1]

def load_equity_data(files):
    """Loads and aligns equity data from multiple CSV files."""
    data_dict = {}

    for file in files:
        market = extract_market_name(os.path.basename(file))
        df = pd.read_csv(file, header=None, names=["Date", market], dtype={0: str, 1: float})
        df["Date"] = pd.to_datetime(df["Date"], format='%Y%m%d')
        data_dict[market] = df.set_index("Date")

    combined_df = pd.concat(data_dict.values(), axis=1, join='outer').fillna(method='ffill').fillna(method='bfill')
    combined_df["Total Equity"] = combined_df.sum(axis=1)
    return combined_df

def calculate_metrics(equity_curve):
    """Calculates total profit and maximum drawdown from the equity curve."""
    total_profit = equity_curve.iloc[-1] - equity_curve.iloc[0]
    peak = equity_curve.cummax()
    drawdown = peak - equity_curve
    max_drawdown = drawdown.max()
    return total_profit, max_drawdown, drawdown

def calculate_correlation(df):
    """Calculates and prints the correlation matrix of equity curves."""
    correlation_matrix = df.corr()
    print("\n--- Correlation Matrix ---")
    print(correlation_matrix.to_string())
    print("\n--------------------------")

def plot_equity_and_drawdown(df, drawdown):
    """Plots the combined equity curve and drawdown as separate subplots."""
    fig, ax = plt.subplots(2, 1, figsize=(12, 8), sharex=True)

    # Plot equity curves
    ax[0].set_title("Equity Curve")
    ax[0].set_ylabel("Equity Value")
    for column in df.columns[:-1]:  # Exclude "Total Equity"
        ax[0].plot(df.index, df[column], linestyle='dotted', alpha=0.6, label=f"{column} (Individual)")
    ax[0].plot(df.index, df["Total Equity"], label="Total Equity", linewidth=2, color='black')
    ax[0].legend()
    ax[0].grid()

    # Plot drawdown
    ax[1].set_title("Drawdown")
    ax[1].set_xlabel("Date")
    ax[1].set_ylabel("Drawdown Amount")
    ax[1].plot(drawdown.index, drawdown, color='red', linestyle='solid', alpha=0.7, label='Drawdown')
    ax[1].fill_between(drawdown.index, drawdown, color='red', alpha=0.3)
    ax[1].legend()
    ax[1].grid()

    plt.show()

def main():
    root = tk.Tk()
    root.withdraw()
    files = filedialog.askopenfilenames(title="Select Equity Files", filetypes=[("CSV files", "*.csv")])

    if not files:
        print("No files selected.")
        return

    combined_df = load_equity_data(files)
    total_profit, max_drawdown, drawdown = calculate_metrics(combined_df["Total Equity"])

    print("\n--- Performance Metrics ---")
    print(f"Total Profit: {total_profit:.2f}")
    print(f"Maximum Drawdown: {max_drawdown:.2f}")
    print("--------------------------")

    calculate_correlation(combined_df.drop(columns=["Total Equity"]))
    plot_equity_and_drawdown(combined_df, drawdown)

if __name__ == "__main__":
    main()
Equity Curve Merge Python Script

4.)  Run it and multi-select all the system/market .csv files

5.)  Examine the results:

Combined equity, maximum draw down, and correlation matrix

How did I get ChatGPT to code this for me?

I pay $20 a month for Chat, and it learns from my workflows.  The more I work with it, the more it knows the end goals of the scripts I am asking it to create.

Here are the prompts I provided Chat:

I want the user to be able to use tkinter to select multiple files that contain two columns: 1-date in yyyymmdd format and 2 – the equity value for that date. The filename consists of the system name and the market name and follows the following patterm: system name”-“market name”.csv”. I need you to extract the market from the filename. If the filename is “Gatts-GC.csv”, the market name is “GC.” Once all the files are opened, I need you to align the dates and create a combined equity curve. We can do this in matplotlib if you like. I need you to calculate the total profit and the maximum drawdown among the combined equity curves.

[SCRIPT FAILED]

My data does not included headers. The first column is the date in yyyymmdd with no separators and the second column is a float value. I think your pandas are having a hard time interpreting this

[SCRIPT WORKED]

That is perfect. Can we had a correlation analysis to the equity curves and print that out too.

[SCRIPT WORKED]

Can we plot the draw down on the same canvas as the equity curves?

[SCRIPT WORKED]

Could we put the draw down in a subplot instead of in the profit plot?

[FINAL SCRIPT] Look above

Am I Worried About Being Replaced by Chat GPT?

With Python, its data visualization libraries, and EasyLanguage, you can overcome some of the limitations of TradeStation.  And of course, ChatGPT.  I have been programming in Python for nine years now, and I am very good at base Python.  However, Pandas is still a mystery to me, and interfacing with Matplotlib is not super simple.   Does it frustrate me that I don’t know exactly what ChatGPT is coding?  Yes, it did at first.  But now I use it as a tool.  I may not be coding the Python, but I am managing ChatGPT through my well-thought-out prompts and my feedback.  Does my knowledge of Python help streamline this process?  I think it does.  And I have trained ChatGPT during the process.  I am becoming a manager-programmer, and ChatGPT is my assistant.

Dr. ChatGPT, or How I Learned to Stop Worrying and Love AI.

Embracing AI: The Journey from Skepticism to Synergy with ChatGPT.

Using TradeStation XML output, Python and ChatGPT to create a commercial level Portfolio Optimizer.

As a young and curious child, my parents would buy me RadioShack Science Fair Kits, chemistry sets, a microscope, and rockets.  I learned enough chemistry to make rotten egg gas.  I grew protozoa to delve into the microscopic world.  I scared my mom and wowed my cousins with the Estes Der Big Red rocket.  But it wasn’t until one Christmas morning that I opened the Digital Computer Kit.  Of course, you had to put it together before you could even use it – just like the model rockets.  Hey, you got an education assembling this stuff.  Here is a picture of a glorified circuit board.

My first computer.

This really wasn’t a computer, but more of an education in circuit design; still, you could get it to solve simple problems such as “Crossing the River,” decimal to binary, calculating the cube root, and 97 other small projects.  These problems were solved by following the various wiring diagrams.  I loved how the small panels would light up with the right answers, but I grew frustrated because I couldn’t get beyond the preprogrammed wiring schema.  I had all these problems I wanted to solve but could not figure out the wiring.  Of course, there were real computers out there, such as the HAL 9000.  Just kidding.  I would go to the local Radio Shack and stare at all the computers, hoping one day I would have one sitting on my desk.  My Dad was an aircraft electrician (avionics) in the U.S. Navy with a specialty in Inertial Navigation Systems.  He would always want to talk about switches, gyros, some dude named Georg Ohm, and oscilloscopes.  I had my mind too stuck in space, you know, “3-D Space Battle,” the Apple II game, to listen to or learn from his vast knowledge.    A couple of years later, with the help of a paper route, I had a proper 16K computer, the TI-99/4A.  During this time, I dreamed of a supercomputer that could answer all my questions and solve all my problems.  I thought the internet was the manifestation of this dream, but in fact it was the Large Language Models such as ChatGPT.

Friend or Foe

From a programmer’s perspective AI can be scary, because you might just find yourself out of a job.  From this experiment, I think we are a few years away from this possibility.  Quantum computing, whenever it arrives, might be a viable replacement, but for now I think we are okay.

The Tools You Will Need

You will need a full installation of Python along with NumPy and Pandas installed on your computer if you want ChatGPT to do some serious coding for you.  Python and its associated libraries are simply awesome, and Chat loves to use these tools to solve a problem.  I pay $20 a month for Chat, so I don’t know if you could get the same code as I did with the free version.  You should try before signing up.

The Project:  A Portfolio Optimizer using an Exhaustive Search Engine

About ten years ago, I collaborated with Mike Chalek to develop a parser that analyzes the XML files TradeStation generates when saving strategy performance data. Trading a trend-following system often requires significant capital to manage a large portfolio effectively. However, many smaller traders operate with limited resources and opt to trade a subset of the full portfolio.

This approach introduces a critical challenge: determining which markets to include in the subset to produce the most efficient equity curve. For instance, suppose you have the capital to trade only four markets out of a possible portfolio of twenty. How do you decide which four to include? Do you choose the markets with the highest individual profits? Or do you select the ones that provide the best profit-to-drawdown ratio?

For smaller traders, the latter approach—prioritizing the profit-to-drawdown ratio—is typically the smarter choice. This metric accounts for both returns and risk, making it essential for those who need to manage capital conservatively. By focusing on risk-adjusted performance, you can achieve a more stable equity curve and better protect your account from significant drawdowns.

I enhanced Mike’s parser by integrating an exhaustive search engine capable of evaluating every combination of N markets taken n at a time. This approach allowed for a complete analysis of all possible subsets within a portfolio. However, as the size of the portfolio increased, the number of combinations grew exponentially, making the computations increasingly intensive. For example, in a portfolio of 20 markets, sampling 4 markets at a time results in 4,845 unique combinations to evaluate.

Calculating the number of combinations.

Using the formula above you get 4,845 combinations.  If you estimate each combination to take one second, then you are talking about roughly 80 minutes.  Sampling n = N/2 produces the most combinations; sampling 10 out of 20 will take 51.32 hours.

  • 1 out of 20: 20 combinations
  • 2 out of 20: 190 combinations
  • 3 out of 20: 1,140 combinations
  • 4 out of 20: 4,845 combinations
  • 5 out of 20: 15,504 combinations
  • 6 out of 20: 38,760 combinations
  • 7 out of 20: 77,520 combinations
  • 8 out of 20: 125,970 combinations
  • 9 out of 20: 167,960 combinations
  • 10 out of 20: 184,756 combinations
  • 11 out of 20: 167,960 combinations
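Those counts are simply “N choose n,” and you can sanity-check them with Python’s math.comb:

from math import comb

print(comb(20, 4))            # 4,845 combinations
print(comb(20, 10))           # 184,756 combinations, the worst case for N = 20
print(comb(20, 10) / 3600)    # about 51.3 hours at one second per combination

Checking the combination counts with math.comb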

The exhaustive method is not the best way to go when trying to find the optimal portfolios across a broad search space.  This is where a genetic optimizer comes in handy.  I played around with that too.  However, for this experiment I stuck with a portfolio of eleven markets.  I used the Andromeda-like strategy that I published in my latest installment of my Easing Into EasyLanguage series.
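For what it’s worth, the exhaustive search itself is only a few lines once the equity curves are loaded. This sketch assumes each market’s daily equity is an aligned pandas Series and ranks every combination by profit divided by maximum drawdown; the sample data is invented for illustration.

from itertools import combinations
import pandas as pd

def profit_to_drawdown(equity):
    """Total profit divided by maximum drawdown of a combined equity curve."""
    profit = equity.iloc[-1] - equity.iloc[0]
    drawdown = (equity.cummax() - equity).max()
    return profit / drawdown if drawdown > 0 else float("inf")

def best_subset(curves, n):
    """curves: dict of market -> aligned daily equity Series. Try every n-market combo."""
    best = None
    for combo in combinations(curves, n):
        combined = sum(curves[m] for m in combo)
        score = profit_to_drawdown(combined)
        if best is None or score > best[0]:
            best = (score, combo)
    return best

dates = pd.date_range("2024-01-01", periods=5)
curves = {
    "ES": pd.Series([0, 100, 50, 150, 200], index=dates),
    "NQ": pd.Series([0, -50, 25, 75, 100], index=dates),
    "CL": pd.Series([0, 20, 40, 30, 60], index=dates),
}
print(best_subset(curves, 2))

Skeleton of an exhaustive portfolio search ranked by profit-to-drawdown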

Here is the output of the project when sampling four markets out of a total of eleven.  All this was produced with Python and its libraries and ChatGPT.

Tabular Format

The best 4-Market portfolio out of 11 possibilities.

Graphic Format

ChatGPT, Python, Matplotlib, oh my!

Step 1 – Creating an XML Parser script

You can save your strategy performance report in RINA XML format.  The ability to save in this format seems to come and go, but my latest version of TradeStation provides this capability.  The XML files are ASCII files that contain every bar of data, the market and strategy properties, and all the trades.  However, they are extremely large because each piece of data has a prefix and suffix tag.

<StrategyPerformance>
  <Market>
    <Name>SPY</Name>
    <Bars>
      <Bar>
        <Date>2024-12-01</Date>
        <Open>450.25</Open>
        <High>455.30</High>
        <Low>449.85</Low>
        <Close>452.10</Close>
      </Bar>
      <Bar>
        <Date>2024-12-02</Date>
        <Open>452.15</Open>
        <High>458.40</High>
        <Low>451.50</Low>
        <Close>457.75</Close>
      </Bar>
    </Bars>
  </Market>
  <Trades>
    <Trade>
      <Type>Buy</Type>
      <Date>2024-12-01</Date>
      <Price>450.50</Price>
      <Quantity>100</Quantity>
    </Trade>
    <Trade>
      <Type>Sell</Type>
      <Date>2024-12-02</Date>
      <Price>457.50</Price>
      <Quantity>100</Quantity>
    </Trade>
  </Trades>
  <PerformanceMetrics>
    <NetProfit>700.00</NetProfit>
    <Drawdown>50.00</Drawdown>
    <ProfitFactor>2.5</ProfitFactor>
  </PerformanceMetrics>
</StrategyPerformance>
Small example of XML file

Imagine the size of the XML when working with one-minute or even five-minute bars.
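To give a sense of what the eventual parser has to do, here is a minimal xml.etree sketch that pulls the bars out of a sample like the one above; the real TradeStation files are far larger.

import xml.etree.ElementTree as ET

xml_text = """<StrategyPerformance><Market><Name>SPY</Name><Bars>
<Bar><Date>2024-12-01</Date><Open>450.25</Open><High>455.30</High>
<Low>449.85</Low><Close>452.10</Close></Bar></Bars></Market></StrategyPerformance>"""

root = ET.fromstring(xml_text)
for bar in root.iter("Bar"):
    date = bar.findtext("Date")
    close = float(bar.findtext("Close"))
    print(f"{date},{close}")   # one CSV row per bar

A minimal xml.etree pass over the sample bar data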

Getting ChatGPT to create an XML parser for the specific TradeStation output

  • The first thing I did was save the performance, in XML format, from a workspace in TradeStation that used an Andromeda-like strategy on eleven different daily bar charts.
  • I asked Chat to analyze the XML file that I attached to a new chat.  I started a new chat for this project.  I discovered the chat session is also known as a “workflow”.  This term emphasizes:
    • Collaboration: We work as a team to tackle challenges.
    • Iteration: We revisit and improve upon earlier steps as needed.
    • Focus: Each session builds upon the previous ones to move closer to a defined goal.
  • Once it understood the mapping of the XML, I asked Chat to extract the bar data and output it to a csv file.  And it did it without a hitch.  When I say it did it, I mean it created a Python script that I loaded into my PyScripter IDE and executed.
  • I then asked for an output of the trade-by-trade report, and it did it without a hitch.  Notice these tasks do not require much in the way of reasoning.

Getting ChatGPT to combine the bar and trade data to produce a daily equity stream.

This is where Python and its number crunching libraries came in handy.  Chat pulled in the following libraries:

  • xml.etree
  • pandas
  • tkinter
  • datetime

I love Python, but the level of abstraction in its libraries can make you dizzy.  It is not important to fully understand the pandas DataFrame to utilize it.   Heck, I didn’t really know how it mapped and extracted the data from the XML file.  I prompted Chat with the following:

[My prompts are bold and italicized.]

With the bar data and the trade data and the bigpointvlaue, can you create an equity curve that shows a combination of open trade equity and closed trade equity?  Remember the top part of the xml file contains bar data and the lower part contains all the trade data.

It produced a script that hung up.  I informed Chat that the script hung up and wasn’t working.  It found the error and fixed it.  Think about what Chat was doing: it was able to align the data so that open trades produced open trade equity and closed-out trades produced closed trade equity.  Well, initially it had a small problem.    It knew what Buy and Sell meant, and the math involved in calculating the two forms of equity, open and closed.  I didn’t inform Chat of any of this.  But the equity data did not look exactly right; it looked like the closed trade equity was being calculated improperly.

Is the script checking for LExit and SExit to calculate when a trade closes out?

Once it figured out that the equity stream must contain open and closed trade equity and learned the terms LExit and SExit, a new script was created that nearly replicated the equity curve from the TradeStation report.  When Chat starts creating lengthy scripts, it will open a sidebar window called the “Canvas” and put the script in there.  This makes it easier to copy and paste the code.  I eventually noticed that the equity curve did not include commission and slippage charges.

Please extract the slippage and commission values and deduct this amount from each trade.

At this point the workflow remembered the mapping of the XML file and was able to incorporate these two values into the trade processing.  I wanted the user to be able to select multiple XML files and have the script process them and produce the three output files: bar_data, trade_data, and equity_data.  I did have to explain that the execution costs must be applied to all the entries and exits.

I would like the user to select multiple XML files and create the data, trade, and equity files, incorporating the system name and market name into the naming scheme.

A new library was imported, Tkinter, and a file-open dialog was used so the user could select all the XML files they wanted to process.  These few chores required very little interaction between Chat and me.  I thought, wow, this is going to be a breeze.  I moved on to the next phase of the project by asking Chat the following:

Can you create a script that will use Tkinter to open the equity files from the prior script and allow the user to choose N out of the total files selected to create all the combined equity curves using an exhaustive search method.

I knew this was a BIG ASK!  But it swallowed this big pill without a hitch.  Maybe we programmers will be replaced sooner rather than later.  I wanted to fine-tune the script.
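For a sense of why this was not as big an ask as it felt, here is a hedged sketch of the core of such a script: a Tkinter file-open dialog plus itertools.combinations to enumerate every way of choosing n of the selected equity files.  The column names ("Date", "Equity") and file layout are my assumptions, not the exact script Chat produced.

    import pandas as pd
    import tkinter as tk
    from itertools import combinations
    from tkinter import filedialog

    # Let the user pick the equity CSV files produced by the earlier script.
    root = tk.Tk()
    root.withdraw()                                   # hide the empty main window
    files = filedialog.askopenfilenames(title="Select equity files",
                                        filetypes=[("CSV files", "*.csv")])

    # Load each equity curve, then sum the aligned curves for every
    # combination of n files to get the combined equity stream.
    n = 4
    curves = {f: pd.read_csv(f, index_col="Date", parse_dates=True)["Equity"] for f in files}
    for combo in combinations(files, n):
        combined = (pd.concat([curves[f] for f in combo], axis=1)
                      .ffill().fillna(0)              # align on a master date index
                      .sum(axis=1))
        # ...score the combination here (total profit, drawdown, ratio)...

Sketch of the exhaustive combination search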

Can you keep track of maximum drawdown for each combination and then sort the combinations by the profit-to-drawdown ratio?  For the best combination, can you create a csv file with the daily returns so I can plot them in Excel?

On a quick scan of the results, I did not initially notice that the maximum drawdown metric was not right.  So, I pushed on with the fine-tuning.

This works great. In the output files can we delete the path name to the csv files. For the output I would just like to have the system name and the symbol for each combination.

Script created.

I had the following error message: NameError: name ‘all_combinations’ is not defined.  I also asked Chat if it could add “Date” to the header for the combination daily files.

Many times, Chat will just recreate the part of the code that needs to be modified.  This makes copying and pasting difficult, because you have to make sure you find the right location in the code to replace.  I always ask Chat to recreate the entire script with the latest changes.  

Hey Chat, the following error message was created:  system, symbol = filename.replace(".csv", "").split("_")  ValueError: too many values to unpack (expected 2)

The error was fixed, but I finally noticed the maximum drawdown calculations were not right.

I don’t think the maximum drawdown metric is being calculated properly.  “Andromeda-Like_@SB, Andromeda-Like_@EC, Andromeda-Like_@S, Andromeda-Like_@JY”,
PL:  92141.8499999999,
DD:  92985.59999999992,
Ratio:  0.990926014350609,144.
The drawdown metric is nearly the same as the total profit metric.
Script created.

The drawdown is still not calculated properly. I think there is an error in your reasoning. I calculate drawdown by iterating through the equity curve and keeping track of peaks and subsequent troughs. The largest distance between a peak and its associated trough is the max drawdown.
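That description translates almost line-for-line into Python.  Here is a minimal sketch of what I had in mind (my illustration, not the script Chat produced):

    # Single-pass max drawdown: track the highest equity seen so far and
    # the deepest drop below that peak.
    def max_drawdown(equity_values):
        peak = float("-inf")
        max_dd = 0.0
        for value in equity_values:
            peak = max(peak, value)               # new peak whenever we make a new high
            max_dd = max(max_dd, peak - value)    # deepest drop from any peak so far
        return max_dd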

Script created but looked a little primitive.  So, I asked.

Is this the most efficient method?

Well, I should have kept my mouth shut.  I felt like the Python libraries would be able to do this much quicker with a list comprehension or some other magic, and Chat agreed with me.  Script created.
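For what it's worth, with pandas the vectorized version really is close to a one-liner (assuming equity is a pandas Series of the combined equity values):

    # Running peak minus current equity, then take the worst (largest) gap.
    max_dd = (equity.cummax() - equity).max()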

The drawdown is still incorrect.

Script created.

It was still wrong, and instead of going back and using the original drawdown script, I tried to nudge Chat in the right direction.

The equity curve will have multiple peaks and multiple troughs. You will need to keep track of all of the peaks and troughs and then find the maximum distance between the sets of peaks and troughs.

Script created and still wrong.  This is where you start going down a rabbit hole.  Chat will produce debugging scripts to try and figure out where the error resides, but this is very time-consuming, so I kept pushing Chat to do it right.

Each peak must be tied to its respective trough. The distance between each respective peak and trough can then be measured for the largest value.

Script is wrong.

This is still not working. I don’t think you are calculating the difference between peaks and subsequent troughs properly.

Script is wrong.

This is still not working. We should be able to do this in a single pass. Iterate through the daily values and track the highest level at that point in the series and then the subsequent deepest trough. Store the first difference. Continue iterating looking for a new peak and the subsequent deepest trough. Compare this value to the prior and take the larger of the two. Store this value and continue iterating.

Script from the original request is generated.  Finally, back to the starting point.  What a waste of time; I will be more careful with my suggestions.  However, the script is still wrong – arrgh!  I decided to run the debugger on the drawdown function, and it turned out the code was right, but the data was wrong.

The problem lies in the equity series. It should contain the combined equity for the combinations. There should be a master date index, and each combination populates that master date index. If a date is missing from a combination, then the master date should copy the prior combined value for that combination.

Warning message:  Use obj.ffill() or obj.bfill() instead.  df_aligned = df.reindex(master_date_index).fillna(method="ffill").fillna(0)

Chat created some deprecated code.  This was an easy fix; I just had to replace one line of code.  However, every iteration following this still had the same damn deprecated code.

Error or warning: FutureWarning: Series.__getitem__ treating keys as positions is deprecated. In a future version, integer keys will always be treated as labels (consistent with DataFrame behavior). To access a value by position, use ser.iloc[pos]  peak = equity_series[0]  # Initialize the first value as the peak
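For reference, the non-deprecated forms of the two flagged lines look like this (variable names taken from the warnings above):

    # Replace fillna(method="ffill") with the ffill() method...
    df_aligned = df.reindex(master_date_index).ffill().fillna(0)
    # ...and use positional access via iloc instead of integer indexing.
    peak = equity_series.iloc[0]   # initialize the first value as the peak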

Script updated.

Can we drop the “_equity_curve” from the name of the system and symbol in the Performance Metrics file? Will Excel accept the combination names as a single column, or will each symbol have its own column because of the comma? I would like each combination in the Performance_Metrics file to occupy just one column.

Script created, and this is what it should look like.  Notice it was still echoing the wrong maximum drawdown values I fed it earlier; don't worry, it was right in the script.

Combination                                 Total Profit   Max Drawdown   Profit-to-Drawdown Ratio   Combination Number
"Andromeda-Like_@SB, Andromeda-Like_@EC"       92141.85       92985.60        0.99                       1
"Andromeda-Like_@S, Andromeda-Like_@JY"        82450.55       70530.45        1.17                       2

Conclusion

I could have done the same thing ChatGPT did for me, but I wouldn't have used the NumPy or pandas libraries, simply because I'm not familiar with them. These libraries make the exhaustive search manageable and incredibly efficient; they handle the number crunching much faster than pure Python alone.

To get ChatGPT to generate the code you need, being a programmer is essential. You’ll need to guide it through debugging, steer it in the right direction, and test the scripts it produces. It’s a back-and-forth process—running the script, identifying warnings or errors, and pointing out incorrect outputs. Sometimes, your programming insights and suggestions might inadvertently lead ChatGPT down a rabbit hole. Features that worked in earlier versions may stop working in subsequent iterations as modifications are applied to address earlier issues.

ChatGPT can also slow down at times, and its Canvas tool has a line limit, which can result in incomplete scripts. As a programmer, it’s easy to spot these issues—you’ll need to inform ChatGPT, and it will adjust by splitting the script into parts, some appearing in the Canvas and the rest in the chat window.

The collaboration between ChatGPT and me was powerful enough to replicate, in just one day, software that Mike Chalek and I spent weeks developing a decade ago. The original version had a cleaner GUI, but it was significantly slower compared to what we’ve achieved here.

If you’re a programmer, have Python installed with its libraries, and work with ChatGPT, the possibilities are endless. But there’s no magic—success requires thoughtful feedback and precise prompting.

Email me if you would like the Python scripts that accomplish the following tasks.  If you are not familiar with pandas or XML processing, the code may look a little foreign even if you are Python savvy.  No worries; it just works.

  1. XML Parser – creates data, trades and equity files in .csv format.
  2. TradeStationExhaustiveCombos – creates all the combos when sampling n out of N markets.
  3. A simple Tkinter GUI and Matplotlib graphing tool to plot the combos.

There are three scripts in total.  Remember, you will need Python, pandas, and matplotlib already installed on your computer.  If you have any questions on how to install these, just let me know.