I know this question has been asked many times. I tried several solutions but I couldn't solve my problem.
I have a large nested JSON file (1.4GB) and I would like to make it flat and then convert it to a CSV file.
The JSON structure is like this:
{
  "company_number": "12345678",
  "data": {
    "address": {
      "address_line_1": "Address 1",
      "locality": "Henley-On-Thames",
      "postal_code": "RG9 1DP",
      "premises": "161",
      "region": "Oxfordshire"
    },
    "country_of_residence": "England",
    "date_of_birth": {
      "month": 2,
      "year": 1977
    },
    "etag": "26281dhge33b22df2359sd6afsff2cb8cf62bb4a7f00",
    "kind": "individual-person-with-significant-control",
    "links": {
      "self": "/company/12345678/persons-with-significant-control/individual/bIhuKnFctSnjrDjUG8n3NgOrl"
    },
    "name": "John M Smith",
    "name_elements": {
      "forename": "John",
      "middle_name": "M",
      "surname": "Smith",
      "title": "Mrs"
    },
    "nationality": "Vietnamese",
    "natures_of_control": [
      "ownership-of-shares-50-to-75-percent"
    ],
    "notified_on": "2016-04-06"
  }
}
I know that this is easy to accomplish with the pandas module, but I am not familiar with it.
EDITED
The desired output should be something like this:
company_number, address_line_1, locality, country_of_residence, kind,
12345678, Address 1, Henley-On-Thames, England, individual-person-with-significant-control
Note that this is just the short version. The output should have all the fields.
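For reference, the pandas route mentioned above can be sketched with `pandas.json_normalize`, which flattens nested dictionaries into joined column names (a minimal sketch on a trimmed version of the sample record; note that it leaves lists such as `natures_of_control` as single cells rather than expanding them into rows):

```python
import pandas as pd

# Trimmed version of the sample record above
record = {
    "company_number": "12345678",
    "data": {
        "address": {"address_line_1": "Address 1", "locality": "Henley-On-Thames"},
        "country_of_residence": "England",
        "kind": "individual-person-with-significant-control",
        "natures_of_control": ["ownership-of-shares-50-to-75-percent"],
    },
}

# sep="_" joins nested keys: data -> address -> locality becomes data_address_locality
df = pd.json_normalize(record, sep="_")
csv_text = df.to_csv(index=False)  # or df.to_csv("output.csv", index=False) to write a file
```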
Please scroll down for the newer, faster solution
This is an older question, but I struggled the entire night to get a satisfactory result for a similar situation, and I came up with this:
Explanation:
The cross_join function is a neat way I found to do a cartesian product. (credit: here)
The json_to_dataframe function does the logic, using pandas DataFrames. In my case the JSON was deeply nested: I wanted to split dictionary key/value pairs into columns, but turn lists into rows of a column (hence the concat), which I then cross join with the level above. That multiplies the record count so that each value from the list gets its own row while the previous columns stay identical.
The recursion builds up row sets that are cross joined with the level below them, until the last one is returned.
Then, with the dataframe in tabular form, it is easy to convert it to CSV with the df.to_csv() method.
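A sketch of the approach described above (a hypothetical reconstruction from the description, not the original code; unlike the original it prefixes nested keys with the parent heading to avoid column-name clashes):

```python
import pandas as pd

def cross_join(left, right):
    # Cartesian product of two DataFrames via a temporary constant key
    return (left.assign(_key=1)
                .merge(right.assign(_key=1), on="_key")
                .drop(columns="_key"))

def json_to_dataframe(data_in):
    def flatten_json(data, prev_heading=""):
        if isinstance(data, dict):
            # dict key/value pairs become columns, cross joined level by level
            rows = pd.DataFrame([{}])  # single empty row to cross join against
            for key, value in data.items():
                rows = cross_join(rows, flatten_json(value, prev_heading + "." + key))
            return rows
        if isinstance(data, list):
            # list elements become rows of the same column(s), stacked with concat
            return pd.concat(
                [flatten_json(item, prev_heading) for item in data],
                ignore_index=True,
            )
        # scalar leaf: one column named after the dot-joined path to it
        return pd.DataFrame({prev_heading[1:]: [data]})
    return flatten_json(data_in)

sample = {"id": 1, "tags": ["a", "b"], "name": {"first": "John"}}
df = json_to_dataframe(sample)
# one row per element of "tags"; columns: id, tags, name.first
```

From there, `df.to_csv("output.csv", index=False)` writes the CSV.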
This should work with deeply nested JSON, being able to normalize all of it into rows by the logic described above.
I hope this will help someone, someday. Just trying to give back to this awesome community.
---------------------------------------------------------------------------------------------
LATER EDIT: NEW SOLUTION
I'm coming back to this because, while the dataframe option kind of worked, it took the app minutes to parse JSON data that was not even that large. So I decided to do what the dataframes do, but by myself:
OUTPUT:
As for what the above does: the cross_join function does pretty much the same thing as in the dataframe solution, but without dataframes, and is therefore faster.
I added the flatten_list generator to make sure that the JSON arrays are all nicely flattened and then provided as a single list of dictionaries, with the key from the previous iteration assigned to each of the list's values. This pretty much mimics the pandas.concat behaviour in this case.
The logic in the main function, json_to_dataframe, is the same as before. All that needed to change was to implement the operations performed by dataframes as hand-coded functions.
Also, in the dataframe solution I was not appending the previous heading to the nested object's keys, but unless you are 100% sure you have no conflicts in column names, that prefixing is pretty much mandatory.
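Putting the pieces described above together, a sketch of the dataframe-free version (again a reconstruction from the description, with row sets represented as plain lists of dicts):

```python
def cross_join(left, right):
    # Same idea as the pandas cross join, but on plain lists of dicts
    new_rows = []
    for left_row in left:
        for right_row in right:
            merged = dict(left_row)
            merged.update(right_row)
            new_rows.append(merged)
    return new_rows

def flatten_list(data):
    # Yield items from arbitrarily nested lists as one flat sequence
    for item in data:
        if isinstance(item, list):
            yield from flatten_list(item)
        else:
            yield item

def json_to_dataframe(data_in):
    def flatten_json(data, prev_heading=""):
        if isinstance(data, dict):
            rows = [{}]
            for key, value in data.items():
                rows = cross_join(rows, flatten_json(value, prev_heading + "." + key))
            return rows
        if isinstance(data, list):
            # mimic pandas.concat: one result row per flattened list element
            rows = []
            for item in flatten_list(data):
                rows.extend(flatten_json(item, prev_heading))
            return rows
        # scalar leaf: column name is the dot-joined path, parent heading included
        return [{prev_heading[1:]: data}]
    return flatten_json(data_in)

rows = json_to_dataframe({"id": 1, "tags": ["a", "b"]})
# [{'id': 1, 'tags': 'a'}, {'id': 1, 'tags': 'b'}]
```

The resulting list of dicts can then be written out with csv.DictWriter, using the union of keys across all rows as the field names.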
I hope this helps :).
EDIT: Modified the cross_join function to handle the case where a nested list is empty, by keeping the previous result set unmodified. The output is unchanged even after adding an empty JSON list to the example JSON data. Thank you, @Nazmus Sakib, for pointing it out.
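The guard described in this edit amounts to an early return in cross_join when the right-hand row set is empty (sketch):

```python
def cross_join(left, right):
    # An empty nested list would otherwise wipe out every row built so far,
    # so keep the previous result set unmodified instead
    if not right:
        return left
    new_rows = []
    for left_row in left:
        for right_row in right:
            merged = dict(left_row)
            merged.update(right_row)
            new_rows.append(merged)
    return new_rows

same = cross_join([{"a": 1}], [])            # [{'a': 1}]
joined = cross_join([{"a": 1}], [{"b": 2}])  # [{'a': 1, 'b': 2}]
```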