gsutil ls returns error: "contains wildcard"

For some reason, we've got a folder which causes gsutil ls to error:
$ gsutil ls -lR gs://mybucket/proj103
...
...
...
gs://mybucket/proj103/delivery/161025_To_Viewport/app_icon/:
39219977 2016-11-17T10:44:08Z gs://mybucket/proj103/delivery/161025_To_Viewport/app_icon/App Ikon.psd
CommandException: Cloud folder gs://mybucket/proj103/delivery/161025_To_Viewport/app_icon/Client - VR [Squared]/ contains a wildcard; gsutil does not currently support objects with wildcards in their name.
When I look at the network share (from my Windows machine) from which the files originate (we upload them to the bucket nightly via gsutil rsync), I see this:
Directory: \\10.1.1.100\prod\proj103\delivery\161025_To_Viewport\app_icon
Mode      LastWriteTime        Length    Name
----      -------------        ------    ----
d-----    10/25/2016 6:18 PM             Client - VR [Squared]
-a----    10/25/2016 5:29 PM   39219977  App Ikon.psd
Are those brackets causing some kind of issue?
I'm on gsutil version 4.22.

In addition to the answer by @mhouglum (thanks!), I'd like to add that there's a workaround:
gsutil ls -lR gs://mybucket/proj103/**
This workaround was also suggested by @mhouglum, here.

The short answer is: yes, unfortunately the brackets are what's causing the issue here.
This is a current limitation in gsutil, and it's being tracked in a GitHub issue (#290). I've added a reference to your Stack Overflow post there.

Related

gsutil: specify project on copy

I'm attempting to come up with commands to facilitate deployment to different environments (production, staging) in my GCP project using gsutil.
The following deploys to production without issue:
gsutil cp -r ./build/* gs://<production-project-name>/
I'd like to deploy to a bucket in another project. The gsutil help page alludes to a -p option for ls and mb used to change the project context of the gsutil command.
I'd like to use a command like this to deploy my app to a staging environment:
gsutil cp -r ./build/* gs://<existing-bucket-in-staging-project>/ -p <staging-project-name>
Alas, the -p option is not available for the cp command; I confirmed this on the gsutil cp doc page.
What is the best way to deploy a build artifact to a Google Cloud Storage bucket in a project other than the one currently set in the terminal environment?
The bucket namespace is global, so as long as the credentials you're using have permission to the other project, you shouldn't need a project parameter with the cp command. In other words, this command should work fine:
gsutil cp -r ./build/* gs://<bucket-in-staging-project>
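For instance, a minimal deploy helper along these lines (just a sketch; the script name and bucket names are placeholders for your own):
#!/bin/sh
# deploy.sh: copy the build output to whichever bucket is passed in.
# Usage: ./deploy.sh gs://<production-bucket>   or   ./deploy.sh gs://<staging-bucket>
BUCKET="$1"
gsutil cp -r ./build/* "$BUCKET"/
As long as the active account (check with gcloud auth list) has write access to the target bucket, no project flag is needed.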

SSH opening file error - no idea why

Running Debian Linux - newest version.
cp /included/filename /usr/bin/
It gives me the error "cannot stat '/included/filename': No such file or directory".
I don't get why there should be an error. I am doing it as superuser.
From your latest comment I conclude you got the paths mixed up. If you want to copy the file install.sh located under /usr/bin/included/, you would need to do
cp /usr/bin/included/install.sh /usr/bin/
To make something similar to your provided command work, I'd assume you are in /usr/bin, and the first argument needs to be a relative one:
cd /usr/bin
cp ./included/install.sh /usr/bin/
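As a sanity check (assuming the file really is named install.sh, as in your comment), you can verify the source path exists before copying:
ls -l /usr/bin/included/install.sh
# or, if you're not sure where the file actually lives:
find / -name install.sh 2>/dev/null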
Please provide more information on what you are trying to do, and provide real-world example code.

I'm trying to integrate LDAP with devstack, and when I ran ./stack.sh I got this: localrc: line 9: KEYSTONE_IDENTITY_BACKEND: command not found

localrc file
ADMIN_PASSWORD=password2
MYSQL_PASSWORD=password2
RABBIT_PASSWORD=password2
SERVICE_PASSWORD=password2
SERVICE_TOKEN=token2
ENABLED_SERVICES=key,n-api,n-crt,n-obj,n-cpu,n-net,n-cond,cinder,c-sch,c-api,c-vol,n-sch,n-novnc,n-xvnc,n-cauth,horizon,mysql,rabbit,ldap
KEYSTONE_IDENTITY_BACKEND=ldap
KEYSTONE_CLEAR_LDAP=yes
LDAP_PASSWORD=9632
I followed this website: http://www.ibm.com/developerworks/cloud/library/cl-ldap-keystone/
I am assuming the above snippet is from a file written as a shell script. Your example looks OK.
I checked the link you provided and noted that the line you say failed is written in the IBM example as:
KEYSTONE_IDENTITY_BACKEND = ldap
This is not legal sh (or bash), and it produces exactly the error message you described:
KEYSTONE_IDENTITY_BACKEND = ldap
-bash: KEYSTONE_IDENTITY_BACKEND: command not found
I suspect you copied and pasted the bad example from the link into your localrc file, which caused the error you saw, but somehow when you wrote the SO question, you corrected the mistake by removing the spaces around the "=".
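For reference, the difference in plain sh:
KEYSTONE_IDENTITY_BACKEND=ldap      # valid: a shell variable assignment
KEYSTONE_IDENTITY_BACKEND = ldap    # invalid: the shell tries to run KEYSTONE_IDENTITY_BACKEND as a command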
Edit: Investigation
TL;DR
Create a file in the root of the devstack repo, devstack/local.conf, with the contents:
[[local|localrc]]
ADMIN_PASSWORD=password2
MYSQL_PASSWORD=password2
RABBIT_PASSWORD=password2
SERVICE_PASSWORD=password2
SERVICE_TOKEN=token2
ENABLED_SERVICES=key,n-api,n-crt,n-obj,n-cpu,n-net,n-cond,cinder,c-sch,c-api,c-vol,n-sch,n-novnc,n-xvnc,n-cauth,horizon,mysql,rabbit,ldap
KEYSTONE_IDENTITY_BACKEND=ldap
KEYSTONE_CLEAR_LDAP=yes
LDAP_PASSWORD=9632
Full Description
I installed devstack on Centos7 (using the Devstack Quick Start Guide):
git clone https://git.openstack.org/openstack-dev/devstack
cd devstack
./stack.sh
I entered passwords as prompted, but eventually it failed with the error:
Error: pg_config executable not found.
Please add the directory containing pg_config to the PATH
or specify the full executable path with the option:
python setup.py build_ext --pg-config /path/to/pg_config build ...
or with the pg_config option in 'setup.cfg'.
I traced the problem to a limited PATH in the sudoers entry, and because my PostgreSQL install is in a non-standard location, I linked pg_config into /usr/local/bin and ran stack.sh again:
sudo ln -s /usr/pgsql-9.3/bin/pg_config /usr/local/bin/pg_config
./stack.sh
(You probably won't have to do this if Postgres is in a standard location.)
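If you want to check whether the same restriction applies on your system, you can inspect sudo's secure_path setting (assuming the usual sudoers layout):
sudo grep -r secure_path /etc/sudoers /etc/sudoers.d
# If the directory containing pg_config isn't in secure_path, sudo won't find it.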
The install took a long time, but it eventually completed:
This is your host IP address: 192.168.200.181
This is your host IPv6 address: ::1
Horizon is now available at http://192.168.200.181/dashboard
Keystone is serving at http://192.168.200.181/identity/
The default users are: admin and demo
The password: 12345678
2016-07-17 18:16:32.834 | WARNING:
2016-07-17 18:16:32.834 | Using lib/neutron-legacy is deprecated, and it will be removed in the future
2016-07-17 18:16:32.834 | stack.sh completed in 1447 seconds.
I killed the devstack session and did it all again with a clean git repo and a local.conf file.
./unstack.sh
cd ..
git clone https://git.openstack.org/openstack-dev/devstack
cd devstack
cat << __EOF > local.conf
[[local|localrc]]
ADMIN_PASSWORD=password2
MYSQL_PASSWORD=password2
RABBIT_PASSWORD=password2
SERVICE_PASSWORD=password2
SERVICE_TOKEN=token2
ENABLED_SERVICES=key,n-api,n-crt,n-obj,n-cpu,n-net,n-cond,cinder,c-sch,c-api,c-vol,n-sch,n-novnc,n-xvnc,n-cauth,horizon,mysql,rabbit,ldap
KEYSTONE_IDENTITY_BACKEND=ldap
KEYSTONE_CLEAR_LDAP=yes
LDAP_PASSWORD=9632
__EOF
./stack.sh
This time there were no password prompts, so the local config was definitely read.
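To confirm the LDAP backend actually took effect, one quick check (assuming devstack wrote its config to the default /etc/keystone/keystone.conf) is:
grep -A 2 '^\[identity\]' /etc/keystone/keystone.conf
# the driver line should reference ldap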

Set up Amazon S3 backup on QNAP using s3cmd

I own a QNAP-219P and I want to set this up manually using s3cmd.
I did quite a bit of research on this, and here are the references I got:
http://web.archive.org/web/20091120211330/http://codemonkeybrown.com/qnaps3.html
http://wiki.qnap.com/wiki/Running_Your_Own_Application_at_Startup
http://wiki.qnap.com/wiki/Add_items_to_crontab
http://blog.wingateuk.com/2013/03/cloud-backup-on-qnap-nas.html?showComment=1413660445187#c8935766892046800936
I'm trying to get the s3cmd to work on my TS-219P.
I got everything to work on the command line, including running the script file (s3-backup.sh):
#!/bin/bash    <-- I also tried #!/bin/sh
/share/maintenance/s3cmd-1.5.0-rc1/s3cmd --rr sync -rv /share/all-shared-folders/emilie/ s3://kingjim-backup/kingjim-nas/emilie/ >> /share/maintenance/log/s3cmd/backup_`date "+%Y%m%d-%H-%M"`.log
(I also tried running s3cmd via Python by prefixing the command with /usr/bin/python.)
If I run it from the SSH command prompt, it works perfectly.
The problem, though, is the cron job. I can confirm the cron job triggered and ran, because my log file (the one above) was generated, but the log is always empty, even though I'm sure some new files were created or modified.
This is my cronjob task:
14 3 * * * /share/maintenance/s3-backup.sh 2>&1 | logger
I've tried a number of variations on the above, but couldn't figure out what was missing.
I feel like some dependency is missing when the script runs from crontab, compared to when I run it at the command prompt, but I don't know how to debug crontab.
It turned out the problem was that the s3cmd configuration file was not found when s3cmd ran from cron.
The fix was simply to copy the .s3config file to a safe shared folder, and then call s3cmd with the "--config" parameter followed by the path to that file.
Like this:
/share/maintenance/s3-backup/s3cmd/s3cmd --config /share/maintenance/s3-backup/s3cmd.config --rr sync -rv /share/MD0_DATA/ s3://xxx-backup/xxx-nas/ >> /share/maintenance/s3-backup/logs/backup_`date "+%Y%m%d-%H-%M"`.log 2>&1
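The likely underlying cause (an assumption, but a common one) is cron's minimal environment: HOME and PATH can differ from your SSH session, so s3cmd can't find ~/.s3cfg. If you need to debug this kind of thing, a standard trick is to dump cron's environment and diff it against your interactive shell's:
* * * * * env > /tmp/cron-env.txt
# then, from your SSH session:
env > /tmp/shell-env.txt
diff /tmp/cron-env.txt /tmp/shell-env.txt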

Download using gsutil

I was using gsutil to download a trace file from google storage.
The command I used was:
gsutil/gsutil cp gs://clusterdata-2011-1/task_usage/part-00499-of-00500.csv.gz ./
But I got an error:
GSResponseError: Status=404, code=NoSuchKey, reason=Not Found.
However, when I used the ls command in gsutil, the file existed.
Any suggestion is appreciated.
It finally works. The cause may have been the gsutil version, or the server may not have been working properly last time.
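For anyone hitting a similar 404, a few generic checks can help narrow it down (a debugging sketch, not specific to this file):
gsutil version    # make sure you're not on a stale release
gsutil ls -L gs://clusterdata-2011-1/task_usage/part-00499-of-00500.csv.gz    # inspect the object's metadata
gsutil -D cp gs://clusterdata-2011-1/task_usage/part-00499-of-00500.csv.gz ./    # -D prints debug output for the request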