Need help using awk or similar to print/output a partial line of a JSON file [closed]

In the following example, I need to capture the content within the 2nd set of quotes on line 5, up to, but not beyond, the decimal point.
The contents of the quotes vary, so everything between " and . must be captured; it cannot be matched with a search string based on any particular content in between.
It is also possible that the line number may change in the future; however, the line can always be found by searching for "Item".
The process should use awk, grep, cat, sed, or a combination of them, due to the limitations of the proprietary environment/OS. I have searched around but wasn't able to find anything that would work as desired.
filename: data.json
{
"Brand": "Marketside",
"Price": "3.97",
"SKU": "48319448",
"Item": "12-ct_Large_Grade_A(Brown_Organic).48319448",
}
An example of a successful output would be:
12-ct_Large_Grade_A(Brown_Organic)

The requirement to rely exclusively on line-oriented tools to manipulate JSON seems extremely misdirected. When manipulating structured formats, use tools which understand the structured format.
jq '.Item|split(".")[0]' data.json
to extract up to the first dot; or
jq '.Item|sub("[.][^.]*$";"")' data.json
to discard the text from the last dot until the end of the field.
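If you want the bare string without the surrounding JSON quotes, add jq's -r (raw output) flag, e.g.:
jq -r '.Item|split(".")[0]' data.json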
(jq doesn't like the superfluous last comma after the Item in your pseudo-JSON, though.)
There is no doubt in anyone's mind that your acute problem as stated can be solved with a simple Awk or sed script. What happens then - what already happened here - is that you discover additional requirements which were not obvious from the toy example you posted. A proper, portable solution copes with JSON samples with strings with embedded commas and escaped double quotes, and continues to work when the superficial JSON format changes because a component somewhere upstream is updated to put all the JSON on a single line or whatever.
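For what it's worth, under the assumptions of the toy example (the Item field stays on its own line, contains no escaped quotes, and the part you want contains no dot of its own), a sed one-liner along these lines would do it:
sed -n 's/.*"Item": *"\([^".]*\).*/\1/p' data.json
It stops working the moment any of those assumptions is violated.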

Here is an awk:
awk -F'.' '/Item/{split(substr($0,1,L=length($0)-length($NF)-1),a,"\"");print a[4]}' data.json
12-ct_Large_Grade_A(Brown_Organic)
It searches for Item and then prints the text from the second " up to the last .
Split the string by .
Find the length of the last part after the split: length($NF)
Subtract that length from the total to find the position of the last .: length($0)-length($NF)
Then split the first part by " and print the 4th piece.
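The same logic spelled out over multiple lines for readability (a sketch making the same assumptions as the one-liner above: the Item field sits on one line laid out like the sample):
awk -F'.' '/Item/ {
    # keep everything before the last "." on the line
    head = substr($0, 1, length($0) - length($NF) - 1)
    # split that on double quotes; the value is the 4th piece
    split(head, a, "\"")
    print a[4]
}' data.json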

Related

Returning the position of pattern matches in a text file with multiple lines [closed]

I have a long text file with the following format:
>foo_bar
TATGTTCTGCAACTGTATAATGGTATAAAAACATTGCAAAATGTAATGAAACTTGTTATTTTGTGAAATACATTCTATAAATATCACTATTTCATGAAAA
ATATTGAAAATCATTTATTTTCGACAAGTAGAACCATAGGTTCTGTAATTGTAAATAGTTCTGCAAACTTAACCTGTTTTGCAGAAGAATATGTTTTCAC
TAGTTAACTTGTAGAATGTTTAGGATTGTTAAAATTTTTAACAAAATAAGATTTTATAGAACATGATTTGCAAAATAACACATTTTGCAATATTTTTATA
CCATATATAGTTGCAGAACATATGGGGACTACGGGCAGCCGGTAAATATGTGGACTACATGGAACTTGTTCAGATACATCTGGAGCAAAGAGCCACCGCT
CTAAATTATCTCTTCTCATTTCCAGTATTATATCTCTCATGCTAAATTATCTCTACAAATCATGACCTCTCTTAGCAATCTCCCTGAGCATCTCCGTAGG
GAGCAGATATTCACCCGTCTTCCGATGAAAGACCTAATGGTCCTCGCATCTGCAAGTCATGTCTTGCGTTAATCTTTCTCTCTCTTTTTGTGGAATCCCA
TCTCTCCTCTTATCAACTAAACCAGATACAGTTTGCACCAACTTTCTTCACTCCCCTGTTACATGAGAAGGCCAGACTTAGGTAGCTTCTGAATCAGAAC
CCGGTCATTCCAAGCATGGGATTTCTTGTTGATCTCTTGTTTTTATGTAATAGTGATCATTTGATATCTGGTGTTGATGGGAATTCAGATGTATGGGACT
TTGTTTATTGTTGATGTGGAATTCTTATATTTTACTGTGTACTATAAAATTTTAGTGATACCTACTATCTATTGTATAAATTGATTAATTGATGTTCTTA
>bar_foo
TATGTTCTGCAACTGTATAATGGTATAAAAACATTGCAAAATGTAATGAAACTTGTTATTTTGTGAAATACATTCTATAAATATCACTATTTCATGAAAA
ATATTGAAAATCATTTATTTTCGACAAGTAGAACCATAGGTTCTGTAATTGTAAATAGTTCTGCAAACTTAACCTGTTTTGCAGAAGAATATGTTTTCAC
TAGTTAACTTGTAGAATGTTTAGGATTGTTAAAATTTTTAACAAAATAAGATTTTATAGAACATGATTTGCAAAATAACACATTTTGCAATATTTTTATA
CCATATATAGTTGCAGAACATATGGGGACTACGGTACTACGGTAAATATGTGGACTACATGGAACTTGTTCAGATACATCTGGAGCAAAGAGCCACCGCT
CTAAATTATCTCTTCTCATTTCCAGCTGCATATCTCTCATGCTAAATTATCTCTACAAATCATGACCTCTCTTAGCAATCTCCCTGAGCATCTCCGTAGG
GAGCAGATATTCACCCGTCTTCCGATGAAAGACCTAATGGTCCTCGCATCTGCAAGTCATGTCTTGCGTTAATCTTTCTCTCTCTTTTTGTGGAATCCCA
TCTCTCCTCTTATCAACTAAACCAGATACAGTTTGCACCAACTTTCTTCACTCCCCTGTTACATGAGAAGGCCAGACTTAGGTAGCTTCTGAATCAGAAC
CCGGTCATTCCAAGCATGGGATTTCTTGTTGATCTCTTGTTTTTATGTAATAGTGATCATTTGATATCTGGTGTTGATGGGAATTCAGATGTATGGGACT
TTGTTTATTGTTGATGTGGAATTCTTATATTTTACTGTGTACTATAAAATTTTAGTGATACCTACTATCTATTGTATAAATTGATTAATTGATGTTCTTA
I.e., there is a header line which begins with a ">", and then an arbitrary number of lines with no more than 100 letters in them. I would like to find the positions within the non-header lines that match either "GCAGC" or "GCTGC". Overlapping match sites would both get recorded individually.
An example output would be a three column text file where the first column contained the header line for that block of text minus the ">", the second column contained the start position of a pattern match (i.e., the number of characters into the text block, excluding line-break characters), and the third column recorded which of the two patterns were matched. E.g.:
foo_bar 109 GCAGC
bar_foo 58289 GCTGC
Not sure how complex this task is, and in particular whether there is a memory-efficient way to perform this operation in a streaming fashion. awk or sed seem like two utilities which might work, but the required command is beyond my limited understanding of the programs.
A tiny tweak on yesterday's answer:
sub(/^>/, "") {                      # header line: strip the ">" and remember it
    hdr = $0
    off = 0                          # running character count for this block
    next
}
{
    while ( match($0, /GC[AT]GC/) ) {
        print hdr, off + RSTART, substr($0, RSTART, RLENGTH)
        # blank out the first matched character so overlapping matches
        # are still found on the next pass
        $0 = substr($0, 1, RSTART-1) " " substr($0, RSTART+1)
    }
    off += length($0)                # advance past this line, newlines excluded
}
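Saved as, say, find_motifs.awk (the file names here are only illustrative), the script would be run against the FASTA file as:
awk -f find_motifs.awk input.fa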
Please get the book Effective AWK Programming, 5th Edition, by Arnold Robbins to learn the basics of awk.

How to use awk delimiters in a large csv with text fields with commas [closed]

I have a .csv with 470 columns and tens of thousands of rows of products, many with text strings that include commas, which cause my awk statements to blow out and write to the wrong columns, thus corrupting my data. Here are the statements I'm using:
Input example:
LDB0010-300,"TIMELESS DESIGN: Classic, Elegant, Beautiful",Yes,1,Live,...
LDB0010-400,"CLASSIC DESIGN: Contemporary look",No,0,Not Live,...
LDB0010-500,"Everyone should wear this, almost!",Yes,0,Not Live,...
Code:
cat products.csv | sed -e 's/, /#/g' | awk -F, 'NR>1 {$308="LIVE" ; $310="LIVE" ; $467=0 ; print $0}' OFS=, | sed -e 's/#/, /g'
Current output, which is wrong with data written in the wrong columns:
LDB0010-300,"TIMELESS DESIGN: Classic",LIVE, Beautiful",Yes,1,Live,...
LDB0010-400,"CLASSIC DESIGN: Contemporary look",No,0,0,...
LDB0010-500,"Everyone should wear this",LIVE,Yes,0,Not Live,...
When studying the data closer, I noticed that in the cells with text descriptions, commas were always followed with a space, whereas commas used as delimiters had no space after them. So the approach I took was to substitute comma-space with '#', run my awk statement to set the values of those columns, then substitute back from '#' to comma-space. This all looked pretty good until I opened the spreadsheet and noticed that there were many rows with values written into the wrong columns. Does anyone know a better way to do this that will prevent these blow outs?
The sample data you posted does not reproduce the symptoms you report with the code you provided. The simplest explanation is that your observation (that commas followed by a space are always field-internal, and other commas never are) is in fact incorrect. This should be easy enough to check:
sed 's/, /#/g' products.csv | awk -F, '{ a[NF]++ } END { for (n in a) print n, a[n] }'
If you don't get a single line of output with exactly the correct number of columns and rows, you can tell that your sed trick is not working correctly. (Notice also the fix for the useless cat.)
Anyway, here is a simple Python refactoring which should hopefully be obvious enough. The Python CSV library knows how to handle quoted fields so it will only split on commas which are outside double quotes.
#!/usr/bin/env python3
import csv
import sys
w = csv.writer(sys.stdout)
for file in sys.argv[1:]:
    with open(file, newline='') as inputfile:
        r = csv.reader(inputfile)
        for row in r:
            row[307] = "LIVE"
            row[309] = "LIVE"
            row[466] = 0
            w.writerow(row)
Notice how Python's indexing is zero-based, whereas Awk counts fields starting at one.
You'd run it like
python3 this_script.py products.csv
See also the Python csv module documentation for various options you might want to use.
The above reads all the input files and writes the output to standard output. If you just want to read a single input file and write to a different file, that could be simplified to
#!/usr/bin/env python3
import csv
import sys
with open(sys.argv[1], newline='') as inputfile, open(sys.argv[2], 'w', newline='') as outputfile:
    r = csv.reader(inputfile)
    w = csv.writer(outputfile)
    header = True
    for row in r:
        if not header:  # don't muck with first line
            row[307] = "LIVE"
            row[309] = "LIVE"
            row[466] = 0
        w.writerow(row)
        header = False
You'd run this as
python3 thisscript.py input.csv output.csv
I absolutely hate specifying the output file as a command-line argument (we should have an option instead) but for a quick one-off, I guess this is acceptable.
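As an aside, if GNU awk happens to be available in your environment, its FPAT variable can split quoted CSV fields directly, with no sed round-trip. This is only a sketch, assuming no empty fields, no embedded newlines, and no escaped quotes inside the quoted fields:
gawk 'BEGIN { FPAT = "([^,]+)|(\"[^\"]+\")"; OFS = "," }  # a field is either a run of non-commas or a double-quoted string
NR > 1 { $308 = "LIVE"; $310 = "LIVE"; $467 = 0 }         # same columns as the Python version, but 1-based
{ print }' products.csv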

removing space for a url string inside a text file [closed]

I have a very big text file (1 GB) and I see that there are a few places where the http url field has a space in it.
For example, in the lines below there is a space inside "brad pitt" and inside "[30 wet=]". They should be changed to "bradpitt" and "[30wet=]", but such spaces can occur in any url or trim_url. I am currently finding these places using my program and then manually fixing them in vim. Is there a way we can do it using awk/sed?
0.0 q:hello url:http://sapient.com/bapper/30/brad pitt/C345/surf trim_url:http://sapient.com/bapper/30/brad pitt/C345 rating:good
0.0 q:hello url:http://sick.com/bright/[30 wet=]/sound trim_url:http://sick.com/bright/[30 wet=]rating:good
What I tried to do was sed:
sed -i -e 's/*http*[:space:]*/*http*/g' test.txt
Using perl and a proper module to URI encode the URL:
perl -MURI::Escape -pe 's!(https?://)(.*)!$1 . uri_escape($2)!e' file
You can even replace the file in place with the -i switch (just like sed): perl -MURI::Escape -i -pe [...]
Output
0.0 q:hello url:http://sapient.com%2Fbapper%2F30%2Fbrad%20pitt%2FC345%2Fsurf%20trim_url%3Ahttp%3A%2F%2Fsapient.com%2Fbapper%2F30%2Fbrad%20pitt%2FC345%20rating%3Agood
0.0 q:hello url:http://sick.com%2Fbright%2F%5B30%20wet%3D%5D%2Fsound%20trim_url%3Ahttp%3A%2F%2Fsick.com%2Fbright%2F%5B30%20wet%3D%5Drating%3Agood
URI::Escape - Percent-encode and percent-decode unsafe characters
Note
As msanford said in the comments, spaces in a URL are meaningful. You can't simply cut them out without breaking the link, turning it into something that is no longer reachable.

Need suggestions with reading text files by every n-th line in Raku [closed]

I am looking for some suggestions on how I can read text files by every n-th line in Raku/perl6.
In bioinformatics research, we sometimes need to parse text files in a somewhat less than straightforward manner, such as Fastq files, which store data in groups of 4 lines at a time. Even more, these Fastq files come in pairs. So if we need to parse such files, we may need to do something like reading 4 lines from the first Fastq file, then 4 lines from the second Fastq file, then the next 4 lines from the first Fastq file, then the next 4 lines from the second Fastq file, and so on.
May I have some suggestions on the best way to approach this problem? Raku's "IO.lines" approach seems to handle each line one at a time, but I am not sure how to handle every n-th line.
An example fastq file pair: https://github.com/wtwt5237/perl6-for-bioinformatics/tree/master/Come%20on%2C%20sister/fastq
What we tried before with "IO.lines": https://github.com/wtwt5237/perl6-for-bioinformatics/blob/master/Come%20on%2C%20sister/script/benchmark2.p6
Reading 4 lines at a time from 2 files and processing them into a single thing can easily be done with zip and batch:
my @filenames = <file1 file2>;
for zip @filenames.map: *.IO.lines.batch(4) {
    # expect ((a,b,c,d),(e,f,g,h))
}
This will keep producing until at least one of the files is fully handled. An alternative to batch is rotor: this will keep going while both files fill up 4 lines completely. Other ways of finishing the loop are to also specify the :partial flag with rotor, or to use roundrobin instead of zip. YMMV.
You can use the lines method. Raku Sequences are lazy. This means that iterating over an expression like "somefile".IO.lines will only ever read one line into memory, never the whole file. In order to do the latter you would need to assign the Sequence to an Array.
The pairs method helps you get the index of each line. In combination with the divisibility operator %% we can write
"somefile".IO.lines.pairs.grep({ .key && .key %% 4 }).map({ .value })
in order to get a sequence of every 4th line in a file.

Setting a variable with the first word from another variable [closed]

I need to pull the first word from a variable in my batch script into another variable.
For example:
if %hello% had "apples are awesome" in it and was pulled and put into %hi%,
%hi% would say "apples".
Thanks in advance.
This can be done using a for loop:
for /f %%h in ("%hello%") do [command that uses %%h]
The behaviour of "for" in this circumstance is to split its input up into lines (there is only one, assuming there are no newline characters in your input variable), then split each line into tokens on spaces (you can change the delimiter using the "delim=[chars]" option) and pass the first token of each line to the specified command (you can use "tokens=n,n,..." to get at tokens other than the first on the line).
Note that AIUI you can only use a single letter variable name for the variable to receive the word, so you can't use %%hi as you requested.
(This is all untested, as I'm not at a machine running Windows at the moment, but ought to work if I'm reading the documentation correctly.)
set "hi="
for %%h in (%hello%) do if not defined hi set "hi=%%h"
echo %hi%
should work, as would
set "hi="
for %%h in (%hello%) do set "hi=%%h"&goto done
:done
echo %hi%
Note that the set "var=string" syntax ensures that trailing spaces on the line, as left by some editors, are not included in the value assigned.
You don't say clearly whether the value of hello is apples are awesome or "apples are awesome" - the first is a string of three words with space separators, the second a single string containing one "word" which itself contains spaces. I've assumed the former.