I have a table in an Azure SQL Server database that consists of just a single nvarchar(max) column holding JSON strings like the one below, one per row, flattened to a single line (about 200,000 rows):
{
  "username": "George84",
  "items": [
    { "type": 8625, "score": "90" },
    { "type": 8418, "score": "84" },
    { "type": 7818, "score": "90" },
    { "type": 23, "score": "q" }
  ]
}
Each record has a single username and an array of items with one or more entries. Each entry in the items array contains one type field (integer) and one score field (string). Usernames are unique. Multiple users may have the same type of item, and these duplicates may or may not have the same score.
I want to convert this to a new set of properly normalized tables with a schema like the one below:
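Roughly along these lines (a sketch only; the exact table and column names here are illustrative, inferred from the four target tables):

-- Illustrative target schema; names and key choices are assumptions.
CREATE TABLE Users (
    UserId   INT IDENTITY(1,1) PRIMARY KEY,
    Username NVARCHAR(100) NOT NULL UNIQUE   -- usernames are unique
);

CREATE TABLE Types (
    TypeId INT PRIMARY KEY                   -- the integer "type" from the JSON
);

CREATE TABLE Scores (
    ScoreId INT IDENTITY(1,1) PRIMARY KEY,
    Score   NVARCHAR(50) NOT NULL UNIQUE     -- the string "score" values
);

CREATE TABLE Items (
    UserId  INT NOT NULL REFERENCES Users(UserId),
    TypeId  INT NOT NULL REFERENCES Types(TypeId),
    ScoreId INT NOT NULL REFERENCES Scores(ScoreId)
);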
I know I could do it with an external application, but I'm hoping to take advantage of SQL Server 2016's new JSON processing capability to do it entirely within the database in T-SQL.
What is the best way to accomplish this conversion?

This is my first time working with JSON in T-SQL, but I think it's fun, so there may well be better solutions; anyhow, let's start.
First, I need some data to start with, so I'll generate some temp data as below.
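Something like this (a sketch: the table name, column name, and the second sample row are invented for illustration):

-- Simulated source table: one JSON document per row, as in the question.
CREATE TABLE #MainData (JsonData NVARCHAR(MAX));

INSERT INTO #MainData (JsonData) VALUES
-- the example row from the question, flattened to a single line
(N'{"username":"George84","items":[{"type":8625,"score":"90"},{"type":8418,"score":"84"},{"type":7818,"score":"90"},{"type":23,"score":"q"}]}'),
-- an invented second row, sharing type 8625 and score "90" so duplicates show up
(N'{"username":"Anna17","items":[{"type":8625,"score":"90"},{"type":23,"score":"7"}]}');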
This gives us a table that simulates your main data table.
Lets create some memory tables to store the data as below:
You may have the above tables already created so you can replace them. so lets jump to the t-sql that will fill the tables
And that's it; hope this helps.
For the data I provided, I got the following results:
Users table:
Types table:
Scores table:
Items table:
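Each of those result sets is just a plain SELECT over the corresponding table at the end of the batch:

SELECT * FROM @Users;
SELECT * FROM @Types;
SELECT * FROM @Scores;
SELECT * FROM @Items;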