How to write MATCH results from a .cypher file into a text file via cypher-shell (Windows)?

I want to write match results based on Cypher code inside a .cypher file via cypher-shell into a text file (I am trying to do this on Windows). The cypher file contains:
:begin
match(n) return n;
:commit
I tried to execute:
type x.cypher | cypher-shell.bat -u user -p secret > output.txt
I get no error, but at the end there is just an empty text file "output.txt" inside the bin folder. Testing the Cypher code directly in the cypher-shell (without piping) works. Can anyone help, please?
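One thing worth checking, as a sketch: recent cypher-shell versions can also read the statements directly from a file with the -f option, which avoids the Windows piping step entirely (run cypher-shell.bat --help to confirm your version has it; user, secret, and the file names are the question's own placeholders):
cypher-shell.bat -u user -p secret -f x.cypher > output.txt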

Consider using the APOC library, which can export to different formats; maybe it can help you:
Export to CSV
Export to JSON
Export to Cypher Script
Export to GraphML
Export to Gephi
https://neo4j.com/labs/apoc/xx/export/ (replace xx with your Neo4j version, for example 4.0)
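As a minimal sketch of the CSV variant, reusing the question's own piping pattern (the procedure name is from the APOC export docs; credentials and file names are placeholders, and file export typically requires apoc.export.file.enabled=true in the server config): put the single line
CALL apoc.export.csv.all('output.csv', {});
into export.cypher and run
type export.cypher | cypher-shell.bat -u user -p secret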

Related

Mass update in Table Storage tables

Is there a way to mass-update the TableStorage entities?
Say, I want to rename all the Clients having "York" in the City field to "New-York".
Are there tools to do it directly (without the need to write code)?
You could try to use Microsoft Azure Storage Explorer to achieve it.
First, you have some entities in Table Storage with a City field in your Storage Explorer.
Then you could click the Export button to export all your entities to a .csv file.
Open that file, press Ctrl + F, and choose the Replace item.
Fill in the find and replace fields with what you want, then choose Replace All.
Finally, go back to Storage Explorer and click the Import button to choose the .csv file you edited before.
I wanted to do the trick with export/import, but it's a no-go when you have millions of records. I exported all the records and ended up with a ~5 GB file. Azure Storage Explorer couldn't handle it (my PC: i7, 32 GB RAM).
If someone is also struggling with a similar issue, you can do as follows:
Export the records to a .csv file.
Remove the lines that you don't want to modify (if needed). You can use grep "i_want_this_phrase" myfile > mynewfile, or use the -v option to keep all lines that don't match the given phrase. If the file is too large, split it with a command such as cat bigFile.csv | parallel --header : --pipe -N999 'cat >file_{#}.csv'.
Remove everything except the RowKey column.
Prepare an az CLI command similar to az storage entity merge --connection-string 'XXX' --account-name your_storage -t your_table -e PartitionKey=your_pk MyColumn=false MyColumn#odata.type=Edm.Boolean RowKey=. Remember about odata.type: at first I did an update without it, and instead of bools I ended up with strings. Luckily it was easy to fix.
Open the file in VS Code, select all with Ctrl+A, then press Shift+Alt+I to put a cursor at the end of every line, and paste the previously prepared az CLI command. This way you get a list of az CLI updates, one per RowKey.
Add #!/bin/bash at the beginning of the file, save it as a .sh file, make it executable with chmod +x yourfile, and run it.
Of course, if you want, you can instead write a bash script that reads the file line by line and executes the az command (see the sketch after these steps). I just did it my way as it was much simpler for me; I'm not so experienced in bash, so it would have taken me a while to develop and test the script.
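A minimal sketch of that line-by-line approach, assuming a rowkeys.txt with one RowKey per line and reusing the placeholder values (and the odata annotation spelling) from the command above; double-check the annotation syntax against az storage entity merge --help for your CLI version:
#!/bin/bash
# Update one boolean column for every RowKey listed in rowkeys.txt.
# 'XXX', your_storage, your_table, your_pk, and MyColumn are placeholders.
while IFS= read -r rk; do
  az storage entity merge --connection-string 'XXX' \
    --account-name your_storage -t your_table \
    -e PartitionKey=your_pk RowKey="$rk" \
       MyColumn=false MyColumn#odata.type=Edm.Boolean
done < rowkeys.txt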

Export PSQL table to CSV file using cmd line args

I am attempting to export my PSQL table to a CSV file. I have read many tips online, such as Save PL/pgSQL output from PostgreSQL to a CSV file.
From these posts I have been able to figure out:
\copy (SELECT * FROM foo) TO '/tmp/test.csv' WITH CSV
However, I do not want to hard-code the path; I would like the user to be able to enter it via the export shell script. Is it possible to pass args to a .sql script from a .sh script?
I was able to figure out the solution.
In my bash script I used something similar to the following:
psql $1 -c "\copy (SELECT * FROM users) TO '$2' DELIMITER ',' CSV HEADER"
This command copies the users table from the database specified by the first argument and exports it to a CSV file located at the second argument.
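Wrapped into a complete script, a minimal sketch (export.sh is a hypothetical name; the users table comes from the command above):
#!/bin/bash
# usage: ./export.sh <database> <output.csv>
if [ "$#" -ne 2 ]; then
  echo "usage: $0 <database> <output.csv>" >&2
  exit 1
fi
# $1 = database name, $2 = target CSV path
psql "$1" -c "\copy (SELECT * FROM users) TO '$2' DELIMITER ',' CSV HEADER"
For example: ./export.sh mydb /tmp/users.csv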

Apache Pig load multiple files

I have the following folder structure containing my content, all adhering to the same schema:
/project/20160101/part-v121
/project/20160105/part-v121
/project/20160102/part-v121
/project/20170104/part-v121
I have implemented a Pig script which uses JsonLoader to load and process individual files. However, I need to make it generic so that it reads all the files under the dated folders.
Right now I have managed to extract the file paths using the following:
hdfs dfs -ls hdfs://local:8080/project/20* > /tmp/ei.txt
cat /tmp/ei.txt | awk '{print $NF}' | grep part > /tmp/res.txt
Now I need to know how do I pass this list to pig script so that my program runs on all the files.
We can use a glob path in the LOAD statement.
In your case the statement below should help; let me know if you face any issues.
A = LOAD 'hdfs://local:8080/project/20*/part-v121' USING JsonLoader();
This assumes a .pig_schema file (produced by JsonStorage) is present in the input directory.
Ref : https://pig.apache.org/docs/r0.10.0/func.html#jsonloadstore
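If you would rather pass the extracted list from /tmp/res.txt instead of using a glob, Pig's -param substitution is one option, since LOAD also accepts a comma-separated list of paths. A minimal sketch (process.pig and the INPUT parameter name are hypothetical):
# join the extracted part-file paths into one comma-separated string
INPUT=$(paste -sd, /tmp/res.txt)
pig -param INPUT="$INPUT" process.pig
# inside process.pig:  A = LOAD '$INPUT' USING JsonLoader();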

Hive output to xlsx

I am not able to open an .xlsx file. Is this the correct way to output the result to an .xlsx file?
hive -f hiveScript.hql > output.xlsx
This will work:
hive -S -f hiveScript.hql > output.xls
There is no easy way to create an Excel (.xlsx) file directly from Hive. You could output your query's content to an older version of Excel (.xls) as in the answers given above, and it would open in Excel properly (with an initial warning in recent versions of Office), but in essence it is just a text file with an .xls extension. If you open this file with any text editor you will see the contents of the query output.
Take any .xlsx file on your system, open it with a text editor, and see what you get. It will be all junk characters, since that is not a simple text file.
Having said that, there are many programming languages that allow you to read a text file and create an .xlsx from it. Since no information is provided/requested on this, I will not go into details. However, you may use pandas in Python to create Excel files.
Output a CSV or TSV file, and use Python (the pandas library) to do the converting.
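A minimal sketch of that conversion (file names are placeholders; Hive's CLI writes tab-separated results by default, and pandas needs an Excel writer such as openpyxl installed for .xlsx output):
hive -S -f hiveScript.hql > output.tsv
python -c "import pandas as pd; pd.read_csv('output.tsv', sep='\t').to_excel('output.xlsx', index=False)"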
I am away from my setup right now, so I really cannot test this. But you can give this a try in your Hive shell:
hive -f hiveScript.hql >> output.xls

In Lua, how to print the console output to a file (piping) instead of using the standard output?

I'm working with Torch7 and the Lua programming language. I need a command that redirects the output of my console to a file, instead of printing it to my shell.
For example, in Linux, when you type:
$ ls > dir.txt
The system will print the output of the command "ls" to the file dir.txt, instead of printing it to the default output console.
I need a similar command for Lua. Does anyone know it?
[EDIT] A user suggested to me that this operation is called piping (more precisely, output redirection). So the question should be: "How to do piping in Lua?"
[EDIT2] I would like to use a command like this, with # doing the redirection:
$ torch 'my_program' # printed_output.txt
Have a look here -> http://www.lua.org/pil/21.1.html
io.write seems to be what you are looking for.
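A minimal shell-level sketch of that idea (log.txt is a placeholder): io.output switches the default output file that io.write targets, so everything written afterwards lands in the file.
lua -e 'io.output("log.txt"); io.write("redirected line\n"); io.close()'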
Lua has no default function to capture the console output into a file.
If your application logs its output (which is probably what you're trying to do), capturing it this way will only be possible by modifying the Lua source code.
If your internal system has access to the output of the console, you could do something similar to this (and set it on a timer, so it runs every 25 ms or so):
dumpoutput = function()
    -- open the dump file handle for writing
    local file = io.open([path to file dump here], "w+")
    for i, line in ipairs([console output function]) do
        file:write("\n" .. line)
    end
    -- flush and release the handle
    file:close()
end
Note that the console output function has to store the output of the console in a table.
To clear the console at the end, just do os.execute("cls").