I have data that looks like this:
It's standard financial price data (open, high, low, close).
In addition, I run some calculations that fill two extra columns, 'major_check' and 'minor_check':
'major_check' occasionally returns 1 or 2 (which 'minor_check' will then also return).
'minor_check' also returns 1 or 2, but more frequently.
The rest is filled with 0 or NaN.
I'd like to test for specific patterns:
- Whenever there is a 2 in 'major_check', I want to see if I can find a 21212 pattern in 'minor_check', with 21 preceding the central 2 and 12 following it.
- If there is a 1 in 'major_check', I'd like to find a 12121 pattern in 'minor_check'.
I highlighted a 21212 pattern in the screenshot to give a better idea on what I am looking for.
Once a 21212 or 12121 pattern is found, I'll check whether specific rules applied to the open/high/low/close values of the 5 rows constituting the pattern are met or not.
Of course, one could naively iterate through the dataframe, but that doesn't sound like the Pythonic way to do it.
I didn't manage to find a good way to do this, since a 21212 pattern can have some 0s inside it.

As this answer by Timeless looked surprisingly complex, here is a simpler one.
Method:
- pandas.dropna to skip rows without test results (NaN and None),
- numpy.where and pandas.shift to check for patterns row-wise (faster), or
- pandas.rolling - probably faster, more compact, but still readable,
- then report the results back to the original df.

You haven't specified how to flag the findings. Here they get marked as a True in two new columns, one for each pattern, appended to the original dataframe, for whatever use you would plan for them. They are called "hit1" and "hit2".

Input data
There is no text input data in your post, so until you provide some I came up with my own. It is designed to produce one hit for each pattern:
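For example, a sketch of such a frame (the price values are placeholders; what matters is the layout of the two check columns):

```python
import numpy as np
import pandas as pd

# Made-up OHLC data.  'minor_check' carries a 21212 run around the 2 in
# 'major_check' (rows 0, 1, 3, 5, 6 - note the 0 and the NaN inside it)
# and a 12121 run around the 1 in 'major_check' (rows 7, 9, 10, 11, 12).
df = pd.DataFrame({
    "open":        [10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23],
    "high":        [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24],
    "low":         [ 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22],
    "close":       [10, 12, 11, 14, 13, 16, 15, 18, 17, 20, 19, 22, 21, 23],
    "major_check": [ 0,  0,  0,  2,  0,  0,  0,  0,  0,  0,  1,  0,  0,  0],
    "minor_check": [ 2,  1,  0,  2, np.nan, 1,  2,  1,  0,  2,  1,  2,  1,  0],
})
```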
Locate hits
Skip rows without test results
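A sketch of this step; treating 0 the same as NaN/None (i.e. as "no test result") and calling the filtered frame sub are my assumptions here:

```python
# Keep only the rows that actually carry a minor test result.
# 0 is treated like NaN/None, so the gaps inside a pattern disappear;
# the original row labels are kept, which lets us map hits back later.
sub = df[["major_check", "minor_check"]].copy()
sub["minor_check"] = sub["minor_check"].replace(0, np.nan)
sub = sub.dropna(subset=["minor_check"])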
Pattern search:
.shift() and np.where
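A sketch of the row-wise variant, run on the filtered frame sub from above (calling the flags hit2 for the 21212/major-2 case and hit1 for the 12121/major-1 case is my naming choice):

```python
m, maj = sub["minor_check"], sub["major_check"]

# Neighbours two rows back / one row back / one ahead / two ahead,
# all inside the filtered frame, so 0/NaN gaps no longer break a pattern.
b2, b1, a1, a2 = m.shift(2), m.shift(1), m.shift(-1), m.shift(-2)

is_21212 = b2.eq(2) & b1.eq(1) & m.eq(2) & a1.eq(1) & a2.eq(2)
is_12121 = b2.eq(1) & b1.eq(2) & m.eq(1) & a1.eq(2) & a2.eq(1)

# Flag only the centre row, and only where major_check agrees with it.
sub["hit2"] = np.where(is_21212 & maj.eq(2), True, False)
sub["hit1"] = np.where(is_12121 & maj.eq(1), True, False)
```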
rolling, preferred:
.rolling() was designed for that purpose exactly. Just too bad they haven't implemented .rolling().eq() yet (see the list of window functions). This is why we must resort to apply, with .eq() called from inside a lambda function.
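A sketch of the rolling variant, with .eq() called from inside the lambda; it produces the same hit1/hit2 flags as the shift version above:

```python
pat2, pat1 = [2, 1, 2, 1, 2], [1, 2, 1, 2, 1]

# Centred 5-row windows; windows shorter than 5 (at the edges) yield NaN.
roll = sub["minor_check"].rolling(5, center=True)
found2 = roll.apply(lambda w: w.eq(pat2).all(), raw=False).eq(1)
found1 = roll.apply(lambda w: w.eq(pat1).all(), raw=False).eq(1)

sub["hit2"] = found2 & sub["major_check"].eq(2)
sub["hit1"] = found1 & sub["major_check"].eq(1)
```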
Finally report back to original df
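For example (rows that were skipped during the dropna step simply get False):

```python
# Append the two flag columns to the original dataframe.
hits = sub[["hit1", "hit2"]].reindex(df.index, fill_value=False).astype(bool)
df[["hit1", "hit2"]] = hits

print(df[df["hit1"] | df["hit2"]])
```

With the sample data above, this prints exactly two rows: the centre of the 21212 run (hit2) and the centre of the 12121 run (hit1). The five rows constituting each pattern, for your open/high/low/close rules, are that centre row plus its two neighbours on each side within sub.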