DataFrame rows of JSON lists for training ML with sktime


I'm trying to do multivariate classification with sktime over a set of JSON files, each file holding one experiment.

The input is the following structure:

[
  { "v": 431, "t": 2, "d1": 986000, "d2": 434000, "X": 0 },
  { "v": 77, "t": 0, "d1": 47000, "d2": 613000, "X": 0 },
  { "v": 58, "t": 1, "d1": 197000, "d2": 47000, "X": 0 },
  { "v": 77, "t": 0, "d1": 260000, "d2": 213000, "X": 0 }
]
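For reference, pandas reads one such file into a plain two-dimensional frame, one row per record and one column per key, which is why appending files row-wise loses the file boundaries. A quick sketch with the records above inlined as a string so it runs standalone:

```python
import io
import pandas as pd

# One file's records, inlined as a JSON string for a self-contained example
raw = """[
  {"v": 431, "t": 2, "d1": 986000, "d2": 434000, "X": 0},
  {"v": 77,  "t": 0, "d1": 47000,  "d2": 613000, "X": 0},
  {"v": 58,  "t": 1, "d1": 197000, "d2": 47000,  "X": 0},
  {"v": 77,  "t": 0, "d1": 260000, "d2": 213000, "X": 0}
]"""

df = pd.read_json(io.StringIO(raw))
print(df.shape)  # (4, 5): one row per record, one column per key
```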

Labels for classification are set as a DataFrame with shape (len(files), 1). The following is my implementation with six files. The resulting shape for X is (9528, 5), but it should be six rows, each containing the full series from one file's JSON:

import json
import pandas as pd
import numpy as np
from pandas import json_normalize
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sktime.classification.compose import ColumnEnsembleClassifier
from sktime.classification.compose import TimeSeriesForestClassifier
from sktime.classification.dictionary_based import BOSSEnsemble
# from sktime.classification.shapelet_based import MrSEQLClassifier
from sktime.datasets import load_basic_motions
from sktime.transformers.series_as_features.compose import ColumnConcatenator
from sklearn.model_selection import train_test_split


controls = [
    '_clean_control01.json',
    '_clean_control02.json',
    '_clean_control03.json',
]

exp = [
    '_clean_exp01.json',
    '_clean_exp02.json',
    '_clean_exp03.json',
]

testsets = {
    'control': controls,
    'exp': exp
}

map_experiments = {
    'control': 0,
    'exp': 1
}

normalized_data = {
    'control': [],
    'exp': []
}

experiments = pd.DataFrame()
labels = {'exp': []}

for experiment in testsets:
    files = testsets[experiment]
    arr = normalized_data[experiment]
    for file in files:
        tmp = pd.read_json(file)
        experiments = experiments.append(tmp, ignore_index=True)
        label = map_experiments[experiment]
        labels['exp'].append(label)

labels = pd.DataFrame(labels)

X, y = experiments, labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, shuffle=False, stratify=None)
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
print(X_train.head())
np.unique(y_train)

clf = ColumnEnsembleClassifier(estimators=[
    ("TSF0", TimeSeriesForestClassifier(n_estimators=100), [0]),
    ("BOSSEnsemble3", BOSSEnsemble(max_ensemble_size=5), [3]),
])
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))

I've had trouble finding information on how to build a DataFrame where each row represents a list of encoded or unencoded JSON, CSV, or other objects representing a time series without timestamps. I've seen examples where the JSON keys are encoded to numeric columns, and others that keep string keys, but nothing so far that helps me build a DataFrame of these lists over a series of files.
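(For context: the layout sktime's series-as-features classifiers expect is often called a "nested" DataFrame, with one row per instance, one column per variable, and each cell holding an entire pd.Series. A minimal hand-built sketch, with values invented for illustration:)

```python
import pandas as pd

# Two instances (files), two variables; each cell is a whole univariate series
nested = pd.DataFrame({
    "v": [pd.Series([431, 77, 58]), pd.Series([12, 99, 7])],
    "t": [pd.Series([2, 0, 1]), pd.Series([1, 1, 0])],
})

print(nested.shape)  # (2, 2): rows count instances, not timestamps
```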

There is 1 answer below.

It turned out I was looking for nesting ndarrays in a DataFrame, as follows:

    # Build one row per file, nesting each file's whole series as an ndarray in one cell
    experiments = pd.DataFrame(columns=['exp'])
    for file in files:
        tmp = pd.read_json(file).to_numpy()
        experiments = experiments.append({'exp': tmp}, ignore_index=True)
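The same nesting works without DataFrame.append (which was removed in pandas 2.0) by building the frame in one go. A sketch with the per-file arrays inlined rather than read from disk, their contents invented for illustration:

```python
import numpy as np
import pandas as pd

# Stand-ins for pd.read_json(file).to_numpy() on each experiment file
per_file_arrays = [
    np.array([[431, 2, 986000, 434000, 0],
              [77, 0, 47000, 613000, 0]]),
    np.array([[58, 1, 197000, 47000, 0],
              [77, 0, 260000, 213000, 0]]),
]

# One row per file; each cell stores that file's whole series as an ndarray
experiments = pd.DataFrame({"exp": per_file_arrays})
labels = pd.DataFrame({"label": [0, 1]})  # one label per file

print(experiments.shape)  # (2, 1): one row per file, as intended
```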