Running into issues with importing multiple CSVs into different collections using Keen's Command Line Interface - keen-io

I have no problem importing one CSV file into a collection by using the keen-cli command
keen events:add -p xxxxxx -k xxxxxx -w xxxxxx -r xxxxxx --collection xxxxx --file xxxxxx.csv --csv
With -p, -k, -w, -r being key overrides to avoid installing dotenv and having to mess with an .env file.
This import works fine the first time, but fails when I try to repeat it with a different file and a different collection. My goal is to be able to compare both collections, so I want to keep them in the same project, but I always get the following error:
/Library/Ruby/Gems/2.0.0/gems/keen-cli-0.2.3/lib/keen-cli.rb:16:in `deep_merge': undefined method `keys' for "15:53:30":String (NoMethodError)
I know Keen allows for multiple collections in the same project so I'm not 100% sure what's going on here.

If one file is working and a seemingly identical file isn't, try inspecting the non-working file for:
stray or missing spaces, commas, and carriage returns at the end of the text
stray or missing spaces, commas, and carriage returns at the end of lines

Figured it out!
Keen assumes that whatever your first column is will act as the unique key for the collection.
If there are any missing values in that column, the entire import fails.
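A quick way to spot the problem rows before importing is to scan the first column for empty values (a minimal sketch; it assumes a comma-delimited file with a header row and no quoted fields containing commas):
awk -F',' 'NR > 1 && $1 == "" { print "empty key on line " NR }' xxxxxx.csv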

Related

How to extract the strings in double quotes for localization

I'm trying to extract the strings for localization. There are so many files where some of the strings are tagged as NSLocalizedStrings, and some of them are not.
I'm able to grab the NSLocalizedStrings using ibtool and genstrings, but I'm unable to extract the plain strings without NSLocalizedString.
I'm not good at regex, but I came up with this "[^(]@\""
and with the help of grep:
grep -i -r -I "[^(]@\"" * > out.txt
It worked, and all the strings were actually grabbed into a txt file, but the problem is,
if in my code there is a line:
..... initWithTitle:@"New Sketch".....
I only expect the grep to grab the @"New Sketch" part, but it grabs the whole line.
So in the out.txt file, I see initWithTitle:@"New Sketch", along with some unwanted lines.
How can I write the regex to grab only the strings in double quotes ?
I tried the grep command with the regex mentioned here, but it gave me a syntax error.
For example, I tried:
grep -i -r -I (["'])(?:(?=(\\?))\2.)*?\1 * > out.txt
and it gave me
-bash: syntax error near unexpected token `('
In Xcode, open your project and go to Editor -> Export For Localization... It will create a folder of files; everything that was marked for localization will be extracted there, in XML format. No need to parse it yourself.
If you want to go the hard way, you can then parse those files the way you're trying to do now. It will also have Storyboard strings in there too, btw.
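If you would rather stay with grep, its -o flag prints only the matched part of each line instead of the whole line, which addresses the original complaint (a sketch assuming plain @"..." literals with no escaped quotes inside):
grep -o -r -I '@"[^"]*"' . > out.txt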

IntelliJ: Dynamically updated file header

By default, IntelliJ IDEA will insert (something like) the following as the header of a new source file:
/**
* Created by JohnDoe on 2016-04-27.
*/
The corresponding template is:
/**
* Created by ${USER} on ${DATE}.
*/
Is it possible to update this template so that it inserts the last date of modification when the file is changed? For example:
/**
* Created by JohnDoe on 2016-03-27.
* Last modified by JaneDoe on 2016-04-27
*/
It is not supported out of the box. I suggest you do not include author and creation/last-edit time information in the file at all.
The reason is that your version control system (Git, SVN) already contains the same information automatically. Manual labelling is just a duplicate of existing info, only more error prone and in need of manual updating.
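For instance, Git can produce the last author and modification date of any file on demand (the file path here is illustrative):
git log -1 --format='%an %ad' --date=short -- src/java/test/Test.java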
Here's a working solution similar to what I'm using. Tested on mac os.
Create a bash script which will replace the first occurrence of Last modified by JaneDoe on $DATE, but only if that exact value is not already in the file:
#!/bin/bash
# Refresh the "Last modified" line, but only when today's date is not already there.
FILE=src/java/test/Test.java
DATE=$(date '+%Y-%m-%d')
PREFIX="Last modified by JaneDoe on "
STRING="$PREFIX.*$"
SUBSTITUTE="$PREFIX$DATE"
if ! grep -q "$SUBSTITUTE" "$FILE"; then
  # BSD sed: -i '' edits in place with no backup; replace only the first match
  sed -i '' "1,/$STRING/ s/$STRING/$SUBSTITUTE/" "$FILE"
fi
Install File Watchers plugin.
Create a file watcher with an appropriate scope (it may be this single file or any other scope, so that any change in the project's source code will update the modified date or version etc.) and put the path to your bash script into the Program field.
Now every time the file changes, the date will update. If you want to update the date for each file separately, pass $FilePath$ as an argument to the script.
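For example, the relevant watcher fields might be filled in like this (the script path is illustrative):
Program: /usr/local/bin/update_modified.sh
Arguments: $FilePath$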
This might have been just a comment to @oleg-mikhailov's excellent idea, but the code snippet won't fit. Basically, I just tweaked his solution.
I needed a slightly different syntax, but that's not the issue. The issue was that when the script ran automatically upon file save using the File Watchers plugin, on a file which doesn't include PREFIX it would run over and over forever.
I presume that the issue is with the plugin itself, as it didn't happen when run from the shell, but I'm not sure why it happened.
Anyway, I ended up running the following script (as I said, only a slight change with respect to the original). The new script also raises an error if the prefix doesn't exist. For me this is a feature, as PyCharm prompts me with the error and I can fix the file.
Tested with PyCharm 2021.2.3 on macOS 11.6.
#!/bin/bash
# Update the last_modified_date line, and fail loudly if it is missing.
FILE=$1
DATE=$(date '+%Y-%m-%d')
PREFIX="last_modified_date: "
STRING="$PREFIX.*$"
SUBSTITUTE="$PREFIX$DATE"
if ! grep -q "$SUBSTITUTE" "$FILE"; then
  if grep -q "$PREFIX" "$FILE"; then
    sed -i '' "s/$STRING/$SUBSTITUTE/" "$FILE"
  else
    echo "Error!"
    echo "'$PREFIX' doesn't appear in $FILE"
    exit 1
  fi
fi
PhpStorm has no hook for launching a task after detecting a change in a file (only for uploading to a server). Code templating is based on the creation of a file, not on changes to it.
The behaviour you want (automatically changing a file after a manual change) can be useful for lots of things, but it's a circular headache for an editor: if changing a file triggers a change to the file, that change triggers another change, and so on.
However, you can perhaps enable Live Templates when you launch a Reformat Code action, which could rewrite your header template and with it the modification date.
Another option is a tool such as Grunt, but I don't know if it can manage PHP files.

Recursive rsync over ssh, include only one file extension

I'm trying to rsync files over ssh from a server to my machine. Files are in various subdirectories, but I only want to keep the ones that match a certain pattern (i.e. blah.txt). I have done extensive googling and searching on stackoverflow, and I've tried just about every permutation of --include and --exclude that has been suggested. No matter what I try, rsync grabs all files.
Just as an example of one of my attempts, I have used:
rsync -avze 'ssh' --include='*blah*.txt' --exclude='*' myusername@myserver.com:/path/top/files/directory /path/to/local/directory
To troubleshoot, I tried this command:
rsync -avze 'ssh' --exclude='*' myusername@myserver.com:/path/top/files/directory /path/to/local/directory
expecting it to not copy anything, but it still grabbed all of the files.
I am using rsync version 2.6.9 on OSX.
Is there something obvious I'm missing? I've been struggling with this for quite a while.
I was able to find a solution, with a caveat. Here is the working command:
rsync -vre 'ssh' --prune-empty-dirs --include='*/' --include='*blah*.txt' --exclude='*' user@server.com:/path/to/server/files /path/to/local/files
However! If I type this into my command line directly, it works. If I save it to a file, myfile.txt, and I try `cat myfile.txt` it no longer works! This makes no sense to me.
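A plausible explanation (a guess, not verified on that exact setup): running the command through backticks substitutes the file's contents after the shell has already done its quote processing, so the quote characters in --include='*/' and --exclude='*' are passed to rsync literally and the patterns never match. Saving the command as a script and executing it avoids this:
#!/bin/sh
# myfile.sh -- run with: sh myfile.sh
# Quoting is parsed normally here, unlike with `cat myfile.txt`.
rsync -vre 'ssh' --prune-empty-dirs --include='*/' --include='*blah*.txt' --exclude='*' user@server.com:/path/to/server/files /path/to/local/files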
OS X ships a BSD-style rsync; see the man page:
https://www.freebsd.org/cgi/man.cgi?query=rsync&apropos=0&sektion=0&manpath=FreeBSD+8.0-RELEASE+and+Ports&format=html
-C, --cvs-exclude
This is a useful shorthand for excluding a broad range of files
that you often don't want to transfer between systems. It uses a
similar algorithm to CVS to determine if a file should be
ignored.
The exclude list is initialized to exclude the following items
(these initial items are marked as perishable -- see the FILTER
RULES section):
RCS SCCS CVS CVS.adm RCSLOG cvslog.* tags TAGS
.make.state .nse_depinfo *~ #* .#* ,* _$* *$ *.old *.bak
*.BAK *.orig *.rej .del-* *.a *.olb *.o *.obj *.so *.exe
*.Z *.elc *.ln core .svn/ .git/ .bzr/
then, files listed in a $HOME/.cvsignore are added to the list
and any files listed in the CVSIGNORE environment variable (all
cvsignore names are delimited by whitespace).
Finally, any file is ignored if it is in the same directory as a
.cvsignore file and matches one of the patterns listed therein.
Unlike rsync's filter/exclude files, these patterns are split on
whitespace. See the cvs(1) manual for more information.
If you're combining -C with your own --filter rules, you should
note that these CVS excludes are appended at the end of your own
rules, regardless of where the -C was placed on the command-line.
This makes them a lower priority than any rules you specified
explicitly. If you want to control where these CVS excludes get
inserted into your filter rules, you should omit the -C as a
command-line option and use a combination of --filter=:C and
--filter=-C (either on your command-line or by putting the ":C"
and "-C" rules into a filter file with your other rules). The
first option turns on the per-directory scanning for the
.cvsignore file. The second option does a one-time import of the
CVS excludes mentioned above.
-f, --filter=RULE
This option allows you to add rules to selectively exclude certain
files from the list of files to be transferred. This is most
useful in combination with a recursive transfer.
You may use as many --filter options on the command line as you
like to build up the list of files to exclude. If the filter
contains whitespace, be sure to quote it so that the shell gives
the rule to rsync as a single argument. The text below also
mentions that you can use an underscore to replace the space
that separates a rule from its arg.
See the FILTER RULES section for detailed information on this
option.
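For reference, the working command from the accepted approach can also be written with --filter rules as described above (same placeholder host and paths):
rsync -vre 'ssh' --prune-empty-dirs -f '+ */' -f '+ *blah*.txt' -f '- *' user@server.com:/path/to/server/files /path/to/local/files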

Accurev: How to keep/promote with a multi line comment from the command line?

How to keep/promote with a multi line comment from the accurev command line?
For example if I try:
accurev stat -n -fl | xargs accurev keep -c "$(git log 1234..4311)"
I simply get the error:
You can not use non-printable characters on the command line: # On
branch master\x0a... AccuRev was unable to understand your command.
I can of course strip out the new lines but then the comment is not really useful.
AccuRev commands that take a -c option for a comment must currently be enclosed in quotes and have no line breaks.
As for the output from git log 1234..4311, that could be captured as a manifest file and kept with the other files.
Dave
I'm not sure about doing it directly from the command-line without any extra step, and I'm hesitant to try anything on my client's AccuRev setup. That said, according to the entry on accurev keep from the CLI manual:
-c <comment>
Specify a comment for the transaction. The next command-line argument should be
a quoted string. Alternatively, the next argument can be in the form
@<comment-file>, which uses the contents of text-file <comment-file> as the
comment.
Default: enter a comment interactively, using the text editor named in
environment variable EDITOR (or a system-dependent default editor).
Reading this, I see two ways you can do what you want from the command line (meaning, not using the GUI).
1.) Pipe or cat your comment text into a file, then use the @<comment-file> syntax to get it into your keep.
2.) Get your comment text into your clipboard, then don't give a -c argument to the keep command; let your editor open up, paste, save, and close.
There may be a way to get this all done via the CLI without these middle steps (perhaps you need to format the \x0a into \r\n or something?), but as I said, I'm unwilling to try it on my AccuRev setup, as AccuRev gives me (and everyone else) enough trouble as it is.
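Putting option 1.) together with the original command, the whole flow might look like this (a sketch built on the @<comment-file> form quoted above; untested against a live AccuRev server):
git log 1234..4311 > comment.txt
accurev stat -n -fl | xargs accurev keep -c @comment.txt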
HTH

How to force STORE (overwrite) to HDFS in Pig?

When developing Pig scripts that use the STORE command, I have to delete the output directory for every run or the script stops with:
2012-06-19 19:22:49,680 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 6000: Output Location Validation Failed for: 'hdfs://[server]/user/[user]/foo/bar More info to follow:
Output directory hdfs://[server]/user/[user]/foo/bar already exists
So I'm searching for an in-Pig solution to automatically remove the directory, also one that doesn't choke if the directory is non-existent at call time.
In the Pig Latin Reference I found the shell command invoker fs. Unfortunately the Pig script breaks whenever anything produces an error. So I can't use
fs -rmr foo/bar
(i.e. remove recursively) since it breaks if the directory doesn't exist. For a moment I thought I might use
fs -test -e foo/bar
which is a test and shouldn't break, or so I thought. However, Pig again interprets test's return code on a non-existing directory as a failure code and breaks.
There is a JIRA ticket for the Pig project addressing my problem and suggesting an optional parameter OVERWRITE or FORCE_WRITE for the STORE command. Anyway, I'm using Pig 0.8.1 out of necessity and there is no such parameter.
At last I found a solution on grokbase. Since finding the solution took too long I will reproduce it here and add to it.
Suppose you want to store your output using the statement
STORE Relation INTO 'foo/bar';
Then, in order to delete the directory, you can call at the start of the script
rmf foo/bar
No ";" or quotations required since it is a shell command.
I cannot reproduce it now, but at some point I got an error message (something about missing files) which I can only attribute to rmf interfering with map/reduce. So I recommend putting the call before any relation declaration. After SETs, REGISTERs and defaults should be fine.
Example:
SET mapred.fairscheduler.pool 'inhouse';
REGISTER /usr/lib/pig/contrib/piggybank/java/piggybank.jar;
%default name 'foobar'
rmf foo/bar
Rel = LOAD 'something.tsv';
STORE Rel INTO 'foo/bar';
Once you use the fs command, there are a lot of ways to do this. For an individual file, I wound up adding this to the beginning of my scripts:
-- Delete file (won't work for output, which will be a directory,
-- but will work for a file that gets copied or moved during
-- the script.)
fs -touchz top_100
rm top_100
For a directory
-- Delete dir
fs -rm -r out
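On newer Hadoop versions (2.x and later), -rm also accepts -f, which suppresses the error when the target does not exist; that sidesteps the non-existent-directory problem entirely, though it is not available on the old Pig 0.8.1-era stack mentioned in the question:
fs -rm -r -f out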