mass update in table storage tables - azure-storage

Is there a way to mass update Table Storage entities?
Say I want to rename all Clients that have "York" in the City field to "New-York".
Is there a tool to do it directly (without needing to write code)?

You could try using Microsoft Azure Storage Explorer to achieve this.
First, suppose you have some entities with a City field in a table in Storage Explorer.
Click the Export button to export all your entities to a .csv file.
Open the .csv file in an editor, press Ctrl + F, and choose the Replace option.
Fill in the find and replace fields with what you want, then choose Replace All.
Finally, go back to Storage Explorer and click the Import button to import the .csv file you edited.

I wanted to do the export/import trick, but it's a no-go when you have millions of records. I exported all the records and ended up with a ~5 GB file that Azure Storage Explorer couldn't handle (my PC: i7, 32 GB RAM).
If someone else is struggling with a similar issue, you can do the following:
Export the records to a csv file.
Remove the lines that you don't want to modify (if needed). You can use grep "i_want_this_phrase" myfile > mynewfile, or the -v option to find everything that doesn't match the given phrase. If the file is too large, split it with a command such as cat bigFile.csv | parallel --header : --pipe -N999 'cat >file_{#}.csv'.
Remove everything except the RowKey column.
Prepare an az cli command similar to az storage entity merge --connection-string 'XXX' --account-name your_storage -t your_table -e PartitionKey=your_pk MyColumn=false MyColumn@odata.type=Edm.Boolean RowKey= (the RowKey value gets appended to each line in the next step). Remember the odata.type annotation: at first I did the update without it, and instead of bools I ended up with strings. Luckily it was easy to fix.
Open the file in VS Code, select all with Ctrl+A, then press Shift+Alt+I to put a cursor at the end of every line, and paste the previously prepared az cli command. This way you get a list of az cli updates, one per RowKey.
Add #!/bin/bash at the beginning of the file, save it as a .sh file, make it executable with chmod +x yourfile, and run it.
Of course, if you prefer, you can write a bash script that reads the file line by line and executes the az command for each entry. I just did it my way as it was much simpler for me; I'm not so experienced in bash, so it would have taken me a while to develop and test the script.
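For completeness, a minimal sketch of that loop, assuming a file rowkeys.txt (a hypothetical name) with one RowKey per line and the same placeholder connection string, account, table, partition key, and column as above:

#!/bin/bash
# Run one merge per RowKey listed in rowkeys.txt.
while IFS= read -r rk; do
  az storage entity merge --connection-string 'XXX' --account-name your_storage \
    -t your_table -e PartitionKey=your_pk MyColumn=false \
    MyColumn@odata.type=Edm.Boolean RowKey="$rk"
done < rowkeys.txt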

Related

Storing large entry in redis using cli?

I am trying to store a value of 5000 characters in Redis via the CLI.
My command is SET MY_KEY "copy pasted the value",
but the whole value is not getting pasted in the CLI.
Is there any alternative to it?
I have Redis version 3.0.54.
Yes, here is one which works with most modern shells: create a text file with your command(s) and use input redirection.
For example, create a file named commands.txt with your Redis commands:
SET MY_KEY "copy pasted the value"
And pass it to the CLI through input redirection (Bash here, but the syntax is similar if not equal in most modern shells):
redis-cli < commands.txt
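If only the value itself is too large to paste, redis-cli also has an -x option that reads the last argument of the command from standard input. Assuming the value is saved in a file named value.txt (a hypothetical name):

redis-cli -x SET MY_KEY < value.txt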

Create a csv file of a view in Hive and put it in S3, with headers excluding the table names

I have a view in Hive named prod_schoool_kolkata. I used to get the csv as:
hive -e 'set hive.cli.print.header=true; select * from prod_schoool_kolkata' | sed 's/[\t]/,/g' > /home/data/prod_schoool_kolkata.csv
That was on an EC2 instance. I want the path to be in S3.
I tried giving the path like:
hive -e 'set hive.cli.print.header=true; select * from prod_schoool_kolkata' | sed 's/[\t]/,/g' > s3://data/prod_schoool_kolkata.csv
But the csv is not getting stored.
I also have the problem that in the csv file that does get generated, every column header has a pattern like tablename.columnname, for example prod_schoool_kolkata.id. Is there any way to remove the table names from the headers of the csv?
You have to first install the AWS Command Line Interface.
Refer to the link Installing the AWS Command Line Interface and follow the relevant installation instructions, or go to the sections at the bottom to get the installation links for your operating system (Linux/Mac/Windows etc.).
After verifying that it's installed properly, you can run normal commands like cp, ls, etc. against the S3 file system. So you could do:
hive -e 'set hive.cli.print.header=true; select * from prod_schoool_kolkata'|
sed 's/[\t]/,/g' > /home/data/prod_schoool_kolkata.csv
aws s3 cp /home/data/prod_schoool_kolkata.csv s3://data/prod_schoool_kolkata.csv
Also see How to use the S3 command-line tool
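As for the tablename.columnname headers, a hedged option: recent Hive versions have a hive.resultset.use.unique.column.names setting that controls whether printed headers are qualified with the table name, so (assuming your Hive version supports it):

hive -e 'set hive.cli.print.header=true; set hive.resultset.use.unique.column.names=false; select * from prod_schoool_kolkata' | sed 's/[\t]/,/g' > /home/data/prod_schoool_kolkata.csv

If that setting isn't available, you can strip the prefix from the header line instead, e.g. sed '1s/prod_schoool_kolkata\.//g'.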

How to split SQL on Mac OS X?

Is there any app for Mac to split SQL files, or even a script?
I have a large file which I have to upload to a hosting service that doesn't support files over 8 MB.
*I don't have SSH access
You can use this : http://www.ozerov.de/bigdump/
Or
Use this command to split the sql file
split -l 5000 ./path/to/mysqldump.sql ./mysqldump/dbpart-
The split command takes a file and breaks it into multiple files. The -l 5000 part tells it to split the file every five thousand lines. The next bit is the path to your file, and the next part is the path you want to save the output to. Files will be saved as whatever filename you specify (e.g. “dbpart-”) with an alphabetical letter combination appended.
Now you should be able to import your files one at a time through phpMyAdmin without issue.
More info http://www.webmaster-source.com/2011/09/26/how-to-import-a-very-large-sql-dump-with-phpmyadmin/
This tool should do the trick: MySQLDumpSplitter
It's free and open source.
Unlike the accepted answer to this question, this app will always keep extended inserts intact so the precise form of your query doesn't matter; the resulting files will always have valid SQL syntax.
Full disclosure: I am a shareholder of the company that hosts this program.
The UploadDir feature in phpMyAdmin could help you, if you have FTP access and can modify your phpMyAdmin's configuration (or are allowed to install your own instance of phpMyAdmin).
http://docs.phpmyadmin.net/en/latest/config.html?highlight=uploaddir#cfg_UploadDir
You can split into working SQL statements with:
csplit -s -f db-part db.sql "/^# Dump of table/" "{99}"
This makes up to 99 files, named db-part00, db-part01, and so on, from db.sql.
You can use "CREATE TABLE" or "INSERT INTO" in the pattern instead of "# Dump of ...", depending on how your dump is structured.
Also: Avoid installing any programs or uploading your data into any online service. You don't know what will be done with your information!

How to get a list of files modified since date/revision in Accurev

I have created a workspace backed by some collaboration stream. The stream is updated regularly by team members. My goal is to take the modified files in a given path and put them into another repository (and do this regularly).
The question is how to create a list of files which were modified since a given revision or date (I don't know which approach is best). The command line is preferable.
Once I get the file list I create an automating script to take the files from one place and put them to another.
accurev hist -s Your_Stream -t "2013/05/16 01:00:00"-now -a -fl
You can run accurev stat -m -fx and then parse the resulting XML. The <element> elements will have a modTime attribute, which is the UNIX timestamp when the file was modified.
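A minimal sketch of that parse, assuming xmllint is available and that each <element> also carries a location attribute with the file path (verify the attribute names against your own output); 1368662400 is just an example UNIX timestamp (2013/05/16 00:00 UTC) to filter by date:

accurev stat -m -fx | xmllint --xpath '//element[@modTime > 1368662400]/@location' -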

Is there batch script command where I can change variable values in a .sql file?

I am creating a batch file where I am restoring a database from an IP address and then executing a couple of .sql files against the database. In a couple of the .sql files there are variables declared and set, but this process has to be done on many machines with different values for each variable on each machine.
So I'm able to restore the database through user input of the IP, but I'm not sure how to change the variable values from the batch script.
For example, in one of the .sql files a variable #store was declared and set to some random number. I want to change that number through the batch file.
I am using Windows and SQL Server Express 2008 R2.
You can use "scripting variables" with SQLCMD.
Here's an example from that MSDN page:
You can also use the -v option to set a scripting variable that exists in a script. In the following script (the file name is testscript.sql), ColumnName is a scripting variable.
USE AdventureWorks2012;
SELECT x.$(ColumnName)
FROM Person.Person x
WHERE x.BusinessEntityID < 5;
You can then specify the name of the column that you want returned by using the -v option:
sqlcmd -v ColumnName="FirstName" -i c:\testscript.sql
To return a different column by using the same script, change the value of the ColumnName scripting variable.
sqlcmd -v ColumnName="LastName" -i c:\testscript.sql
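Applied to the question's #store example, a hedged sketch (the file name, instance name, and value below are placeholders): replace the hard-coded number in the .sql file with a scripting variable,

DECLARE @store INT = $(store);

and pass the per-machine value from the batch file:

sqlcmd -S .\SQLEXPRESS -i c:\myscript.sql -v store=42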
If you are working on a Unix/Linux system, you can use sed to search and replace a string.
Example: assuming you need to replace 127.0.0.1 with 192.168.1.1, you can use the following instruction (the dots are escaped, because an unescaped . matches any character in a sed regex):
$ sed 's/127\.0\.0\.1/192.168.1.1/g' script.sql > newScript.sql
This will replace the old IP in script.sql and save a copy of the edited script as newScript.sql.
On Windows I don't know of a built-in way to do it, but you can always download and install Cygwin to do exactly as above.
Hope this helps you.