Big grep from txt list in .gz file logs


This is my problem (for me, actually, a big problem).

I have a txt file with 1,130,395 lines; here is an example:

10812
10954
10963
11070
11099
10963
11070
11099
betti.bt
betti12
betti14
19432307
19442407
19451970
19461949

I have about 2000 .gz log files.

For every line of the txt file, I need a grep to be performed across all the .gz files.

This is an example of the contents of the .gz files (two sample lines):

time=2019-02-28 00:03:32,299|requestid=30ed0f2b-9c44-47d0-abdf-b3a04dbb560e|severity=INFO |severitynumber=0|url=/user/profile/oauth/{token}|params=username:juvexamore,token:b73ad88b-b201-33ce-a924-6f4eb498e01f,userIp:10.94.66.74,dtt:No|result=SUCCESS
time=2019-02-28 00:03:37,096|requestid=8ebca6cd-04ee-4818-817d-30f78ee95731|severity=INFO |severitynumber=0|url=/user/profile/oauth/{token}|params=username:10963,token:1d99be3e-325f-3982-a668-30494cab9a96,userIp:10.94.66.74,dtt:No|result=SUCCESS

The txt file contains the usernames. I need to search the .gz files for lines where the username is present, the url contains "profile", and the line has "result=SUCCESS".

If something is found, write only this to a log file: the username found and the name of the log file in which it was found.

Is it possible to do this? I know that I need to use the zgrep command, but can someone help me? Is it possible to automate the process so it can just run?
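
So far the only approach I can think of is a brute-force loop like the sketch below (untested; names.txt and results.log are just placeholder names), but it decompresses every .gz file once per username, and with over a million usernames and 2000 files that seems hopelessly slow:

# Rough sketch: one zgrep pass over all archives per username.
# Note: the username is used as a regex here, so names containing
# dots (like betti.bt) are matched loosely.
while IFS= read -r user; do
    zgrep -l "url=/user/profile/oauth/{token}|params=username:${user},.*result=SUCCESS" *.gz |
    while IFS= read -r file; do
        echo "${user} found; ${file}" >> results.log
    done
done < names.txt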

Thanks all


There are 2 answers below.

BEST ANSWER

I'd just do (untested):

zgrep -H 'url=/user/profile/oauth/{token}|params=username:.*result=SUCCESS' *.gz |
awk -F'[=:,]' -v OFS=';' 'NR==FNR{names[$0];next} $12 in names{print $12, $1}' names.txt - |
sort -u
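
To see why $12 is the username and $1 is the file name: -F'[=:,]' splits the zgrep output (file name, then the matched line) on every =, : and , character. A quick sanity check on one of the sample lines, assuming a file named log-0001.gz:

$ echo 'log-0001.gz:time=2019-02-28 00:03:37,096|requestid=8ebca6cd-04ee-4818-817d-30f78ee95731|severity=INFO |severitynumber=0|url=/user/profile/oauth/{token}|params=username:10963,token:1d99be3e-325f-3982-a668-30494cab9a96,userIp:10.94.66.74,dtt:No|result=SUCCESS' |
  awk -F'[=:,]' '{print "file: " $1; print "user: " $12}'
file: log-0001.gz
user: 10963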

Or, probably a little more efficient, as it removes the NR==FNR test for every line output by zgrep:

zgrep -H 'url=/user/profile/oauth/{token}|params=username:.*result=SUCCESS' *.gz |
awk -F'[=:,]' -v OFS=';' '
    BEGIN {
        while ( (getline line < "names.txt") > 0 ) {
            names[line]
        }
        close("names.txt")
    }
    $12 in names{print $12, $1}' |
sort -u

If a given username can only appear once in a given log file, or if you actually want multiple occurrences to produce multiple output lines, then you don't need the final | sort -u.

ANSWER

A rewrite using getline. It reads and hashes the file.txt usernames, then gunzips the .gz files given as parameters, splits each record on | until it gets the field containing username:, extracts the actual username, and looks it up in the hash. Not properly tested etc. etc., standard disclaimer. Let me know if it works:

$ cat script.awk
BEGIN{
    while (( getline line < ARGV[1]) > 0 ) {       # read the username file
        a[line]                                    # and hash to a
    }
    close(ARGV[1])
    for(i=2;i<ARGC;i++) {                          # read all the other files
        cmd = "gunzip --to-stdout " ARGV[i]        # form uncompress command
        while (( cmd | getline line ) > 0 ) {      # read line by line
            m=split(line,t,"|")                    # split at pipe
            if(t[m]!="result=SUCCESS")             # check only SUCCESS records
                continue
            n=split(t[6],b,/[=,]/)                 # username in 6th field
            for(j=1;j<=n;j++)                      # split to find it, set to u var:
                if(match(b[j],/^username:/)&&((u=substr(b[j],RSTART+RLENGTH)) in a)) {
                    print u,"found in",ARGV[i]     # output if found in a hash
                    break                          # exit the for loop once found
                }
        }
        close(cmd)
    }
}

Run it (using 2 copies of the same data):

$ awk -f script.awk file.txt log-0001.gz log-0001.gz
10963 found in log-0001.gz
10963 found in log-0001.gz
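
To produce the log file the question asks for, just redirect the output; for example (found.log and the log path are placeholders):

$ awk -f script.awk file.txt /path/to/logs/*.gz > found.log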