Heroku image corruption - thin

When uploading an image through git to Heroku, the image is corrupted when it gets sent from the web server. I've been trying to fix it by running g rm --cached images/contact-me.png (g is my alias for git), then g add images/contact-me.png again, and then pushing.
Also:
xyz@co-data:~/labs/exposeit-site$ sha1sum images/contact-me.png
2d319cd64e94afe7cdd169347653670a1dd82581 images/contact-me.png
xyz@co-data:~/labs/exposeit-site$ wget http://exposeit.herokuapp.com/images/contact-me.png
--2012-08-16 16:50:35-- http://exposeit.herokuapp.com/images/contact-me.png
Resolving exposeit.herokuapp.com (exposeit.herokuapp.com)... 50.19.121.246, 174.129.192.155, 184.73.155.93, ...
Connecting to exposeit.herokuapp.com (exposeit.herokuapp.com)|50.19.121.246|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1230115 (1.2M) [image/png]
Saving to: `contact-me.png'
100%[=================================================================================>] 1,230,115 963K/s in 1.2s
2012-08-16 16:50:36 (963 KB/s) - `contact-me.png' saved [1230115/1230115]
xyz@co-data:~/labs/exposeit-site$ sha1sum contact-me.png
74d97745d35bb67e5517611b683ed461bd0c1686 contact-me.png
and
xyz@co-data:~/labs/exposeit-site$ g ls-files | grep contact-me
images/contact-me.png
The checksums differ, so the file really is being altered somewhere along the way. Is this a problem on Heroku's end?
Update:
Procfile:
web: bundle exec thin start -R config.ru -e $RACK_ENV -p $PORT

So the answer is that the thin gem has a bug: it takes the string length (i.e. the character count) instead of the length of the byte array that is the underlying image. Why it would first be converted to a string, I don't know.
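For anyone unclear on what that means: Ruby's String#length counts characters while String#bytesize counts bytes, so a Content-Length computed from the character count under-reports any body containing multi-byte sequences. A minimal illustration in plain Ruby (not the gem's actual code):
# length counts characters; bytesize counts bytes
s = "señor"
puts s.length     # => 5
puts s.bytesize   # => 6 ("ñ" is two bytes in UTF-8)
# A Content-Length header derived from length instead of bytesize is too
# small, so clients stop reading early and binary files arrive mangled.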
The solution is to add, to your Gemfile:
gem 'rack-jekyll', :git => 'https://github.com/adaoraul/rack-jekyll.git', :require => 'rack/jekyll'
...and it will be downloaded from GitHub rather than RubyGems.

Weird. Still, it looks okay to me: http://exposeit.herokuapp.com/images/contact-me.png
It's best to host images and static content elsewhere rather than put megabytes in source control: https://devcenter.heroku.com/articles/s3

Related

CUB_200_2011 dataset download link error in Colab

I am trying to download the CUB_200_2011 dataset in Colab using
!wget http://www.vision.caltech.edu/visipedia-data/CUB-200-2011/CUB_200_2011.tgz
After running this I got:
--2021-05-28 10:13:12-- http://www.vision.caltech.edu/visipedia-data/CUB-200-2011/CUB_200_2011.tgz
Resolving www.vision.caltech.edu (www.vision.caltech.edu)... 34.208.54.77
Connecting to www.vision.caltech.edu (www.vision.caltech.edu)|34.208.54.77|:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://drive.google.com/file/d/1hbzc_P1FuxMkcabkgn9ZKinBwW683j45/view [following]
--2021-05-28 10:13:12-- https://drive.google.com/file/d/1hbzc_P1FuxMkcabkgn9ZKinBwW683j45/view
Resolving drive.google.com (drive.google.com)... 74.125.195.102, 74.125.195.113, 74.125.195.138, ...
Connecting to drive.google.com (drive.google.com)|74.125.195.102|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘CUB_200_2011.tgz’
CUB_200_2011.tgz [ <=> ] 71.36K --.-KB/s in 0.03s
2021-05-28 10:13:13 (2.41 MB/s) - ‘CUB_200_2011.tgz’ saved [73069]
The length is unspecified, it says it's an HTML file, and I can't extract it; I get an error:
!tar -xvzf CUB_200_2011.tgz
gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
Is there anything wrong with the link, or what is the problem?
Look at the message carefully: the download URL redirects to Google Drive, which serves a confirmation page instead of initiating the download. The following command handles that: it requests the file by its Google Drive id, sets CUB_200_2011.tgz as the output file, uses a cookies.txt file (saved with --keep-session-cookies) to hold the session cookie during the download, extracts the confirmation token to auto-confirm the download, skips the certificate check with --no-check-certificate, and removes cookies.txt once the download is over.
!wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1hbzc_P1FuxMkcabkgn9ZKinBwW683j45' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1hbzc_P1FuxMkcabkgn9ZKinBwW683j45" -O CUB_200_2011.tgz && rm -rf /tmp/cookies.txt
Also, there is nothing wrong with your tar command; it will work properly once the first command completes correctly. Hopefully this resolves your issue.
It seems that the original authors redirected the dataset link to a Google Drive link (this broke tons of online tutorials), but a new public mirror of the data is provided by fast.ai and can be fetched in an IPython session with the following line:
!wget https://s3.amazonaws.com/fast-ai-imageclas/CUB_200_2011.tgz

I'm trying to integrate LDAP with DevStack, and when I ran ./stack.sh I got: localrc: line 9: KEYSTONE_IDENTITY_BACKEND: command not found

localrc file
ADMIN_PASSWORD=password2
MYSQL_PASSWORD=password2
RABBIT_PASSWORD=password2
SERVICE_PASSWORD=password2
SERVICE_TOKEN=token2
ENABLED_SERVICES=key,n-api,n-crt,n-obj,n-cpu,n-net,n-cond,cinder,c-sch,c-api,c-vol,n-sch,n-novnc,n-xvnc,n-cauth,horizon,mysql,rabbit,ldap
KEYSTONE_IDENTITY_BACKEND=ldap
KEYSTONE_CLEAR_LDAP=yes
LDAP_PASSWORD=9632
I followed this website: http://www.ibm.com/developerworks/cloud/library/cl-ldap-keystone/
I am assuming the above snippet is from a file written in shell script. Your example looks OK.
I checked the link you provided and noted that the line you say failed is written in the IBM example as:
KEYSTONE_IDENTITY_BACKEND = ldap
That is not legal sh (or bash) and causes exactly the error message you described:
KEYSTONE_IDENTITY_BACKEND = ldap
-bash: KEYSTONE_IDENTITY_BACKEND: command not found
I suspect you copied and pasted the bad example from the link into your localrc file, which caused the error you saw, but somehow when you wrote the SO question, you corrected the mistake by removing the spaces around the "=".
Edit: Investigation
TL;DR
Create a file in the root of the devstack repo, devstack/local.conf with the contents:
[[local|localrc]]
ADMIN_PASSWORD=password2
MYSQL_PASSWORD=password2
RABBIT_PASSWORD=password2
SERVICE_PASSWORD=password2
SERVICE_TOKEN=token2
ENABLED_SERVICES=key,n-api,n-crt,n-obj,n-cpu,n-net,n-cond,cinder,c-sch,c-api,c-vol,n-sch,n-novnc,n-xvnc,n-cauth,horizon,mysql,rabbit,ldap
KEYSTONE_IDENTITY_BACKEND=ldap
KEYSTONE_CLEAR_LDAP=yes
LDAP_PASSWORD=9632
Full Description
I installed devstack on CentOS 7 (using the DevStack Quick Start Guide):
git clone https://git.openstack.org/openstack-dev/devstack
cd devstack
./stack.sh
I entered passwords as prompted, but eventually it failed with the error:
Error: pg_config executable not found.
Please add the directory containing pg_config to the PATH
or specify the full executable path with the option:
python setup.py build_ext --pg-config /path/to/pg_config build ...
or with the pg_config option in 'setup.cfg'.
I traced the problem to a limited PATH in the sudoers entry, and because my PostgreSQL install is in a non-standard location, I linked pg_config into /usr/local/bin and ran stack.sh again:
sudo ln -s /usr/pgsql-9.3/bin/pg_config /usr/local/bin/pg_config
./stack.sh
(You probably won't have to do this if Postgres is in a standard location).
The install took a long time:
This is your host IP address: 192.168.200.181
This is your host IPv6 address: ::1
Horizon is now available at http://192.168.200.181/dashboard
Keystone is serving at http://192.168.200.181/identity/
The default users are: admin and demo
The password: 12345678
2016-07-17 18:16:32.834 | WARNING:
2016-07-17 18:16:32.834 | Using lib/neutron-legacy is deprecated, and it will be removed in the future
2016-07-17 18:16:32.834 | stack.sh completed in 1447 seconds.
I killed the devstack session and did it all again with a clean git repo and a local.conf file.
./unstack.sh
cd ..
git clone https://git.openstack.org/openstack-dev/devstack
cd devstack
cat << __EOF > local.conf
[[local|localrc]]
ADMIN_PASSWORD=password2
MYSQL_PASSWORD=password2
RABBIT_PASSWORD=password2
SERVICE_PASSWORD=password2
SERVICE_TOKEN=token2
ENABLED_SERVICES=key,n-api,n-crt,n-obj,n-cpu,n-net,n-cond,cinder,c-sch,c-api,c-vol,n-sch,n-novnc,n-xvnc,n-cauth,horizon,mysql,rabbit,ldap
KEYSTONE_IDENTITY_BACKEND=ldap
KEYSTONE_CLEAR_LDAP=yes
LDAP_PASSWORD=9632
__EOF
./stack.sh
This time there were no password prompts, so the local config was definitely read.

Deploy rails application after git push

I want to deploy my application to a remote test server using the Capistrano gem.
Both git and Rails should run on the same server.
I have two users: 'git' for the git repositories, and 'rails' with RVM installed. After a git push I want the post-receive hook to run su rails and then cap deploy.
When I tried to push I got this message:
remote: su: must be run from a terminal
How can I work around this? Can I enable a TTY somehow over the git SSH connection?
I can give up on Capistrano for this case, but I still want RVM and Rails to be usable only by the rails user (so su probably has to be used either way).
Edit
I have now worked around the problem. It's probably a very bad solution, but it works ;). From the scripts below I removed the original paths and echos.
post-receive hook before the workaround:
#!/bin/bash
while read oldrev newrev ref
do
    su rails    # here the script fails
    cd /path/to/rails/app/current/ && cap deploy
done
post-receive hook now:
#!/bin/bash
while read oldrev newrev ref
do
    ssh rails@localhost '/path/to/scripts/deploy.sh'
done
deploy.sh script:
#!/bin/bash
CAP_DIR="/path/to/capistrano/dir"
RUBY="1.9.3-p194"
GEMSET="gemset_name"
# Load RVM into this non-interactive shell, then pick the ruby and gemset
[[ -s "$HOME/.rvm/scripts/rvm" ]] && . "$HOME/.rvm/scripts/rvm"
rvm use $RUBY
rvm gemset use $GEMSET
cd $CAP_DIR
cap deploy

Refinery error when pushing to Heroku

I have a Refinery app that works great locally.
I created a Bamboo stack on Heroku.
When I try to push I can see this:
Preparing app for Rails asset pipeline
Running: rake assets:precompile
rake aborted!
could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
Then I open it up in the browser:
"We're sorry, but something went wrong."
$ heroku logs
Rendered vendor/bundle/ruby/1.9.1/gems/refinerycms-authentication-2.0.2/app/views/refinery/users/new.html.erb within refinery/layouts/login (82.3ms)
2012-03-15T14:43:25+00:00 app[web.1]: Completed 500 Internal Server Error in 1269ms
full output is here
Any help is great, thanks!
Update:
I updated the stack to Cedar and set the Ruby env to 1.9.3.
$ heroku config
DATABASE_URL => ..
GEM_PATH => vendor/bundle/ruby/1.9.1
LANG => en_US.UTF-8
PATH => bin:vendor/bundle/ruby/1.9.1/bin:/usr/local/bin:/usr/bin:/bin
RACK_ENV => production
RAILS_ENV => production
RUBY_VERSION => ruby-1.9.3-p0
SHARED_DATABASE_URL => ..
$ heroku info --app mimacohuoncedar
=== mimacohuoncedar
Addons: Basic Logging, Shared Database 5MB
Database Size: (empty)
Git URL: git@heroku.com:mimacohuoncedar.git
Owner: ..
Repo Size: 9M
Slug Size: 19M
Stack: cedar
Web URL: http://mimacohuoncedar.herokuapp.com/
$ heroku logs now shows this:
this-updated
Where do I go from here? Thanks!
Don't know if you managed to fix this, but I ran into the same issue using the Cedar stack. I found this article on Heroku that seemed to do the trick for me. I ran the line in the terminal and it pushed first time.
I am seeing this same error, and the accepted answer did not solve it for me.
This blog, however, did the trick. The blog title refers to Rails 3.2, but I'm on 3.1 and was seeing the same error.
The blog recommended adding this line to application.rb:
config.assets.initialize_on_precompile = false
The meaning, as summarized from the article: this option prevents the Rails environment from being loaded when the assets:precompile task is executed. Because Heroku precompiles assets before setting the database configuration, you need to set this option to false or your Rails application will try to connect to a nonexistent database.
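For context, here is roughly where that line sits; a minimal sketch of config/application.rb (the module name is whatever your app is called, YourApp here is hypothetical):
require File.expand_path('../boot', __FILE__)
require 'rails/all'

module YourApp
  class Application < Rails::Application
    # Skip loading the full Rails environment (and thus the database
    # connection) when rake assets:precompile runs on Heroku.
    config.assets.initialize_on_precompile = false
  end
end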
I added the line and pushed; everything seems good now.
That output looks suspiciously like the Cedar stack and not Bamboo; give http://devcenter.heroku.com/articles/labs-user-env-compile a go. That should sort you out.

Upload with paperclip very slow (unicorn)

Sitting here with a simple Rails 3 app in which I have a simple Gallery model, and each gallery has many images. The Image model is extended with Paperclip with the following options:
has_attached_file :local,
  :styles => {
    :large => "800x800>",    # fit within 800x800, shrink only
    :medium => "300x300>",
    :thumb => "100x100#",    # resize and crop to fill exactly 100x100
    :small => "60x60#"
  }
In my galleries_controller I have the following action, implemented to work with the jQuery-File-Upload plugin (hence the JSON response).
def add_image
  gallery = Gallery.find params[:id]
  image = gallery.images.new({:local => params[:local]})
  if image.save
    render :json => {:thumb => image.url(:thumb), :original => image.url}
  else
    render :json => {:result => 'error'}
  end
end
To me this is fairly straightforward. But here comes the issue: in development under Mongrel, any kind of upload works just fine, at about 500-1000 ms per upload.
However, when I push it into production I constantly get timeouts from my Unicorn workers, and when an image does get through it takes anywhere from 30 to 55 seconds for one file.
The files I upload are around 100 KB in size.
I have done some testing of the bandwidth between my VPS and my dev computer with iperf and got an average speed of about 77 kbps, so the upload itself should not be the problem.
Note: I also did a test with a non-AJAX file upload using the same app, with a User model that has an avatar.
Development => Completed 302 Found in 693ms
Production => Completed 302 Found in 21618ms
Has anyone experienced a similar issue with (Rails 3, Unicorn) file uploads?
So after digging around I determined that on my VPS it was the OpenMP option in ImageMagick that was causing the very slow operation. My first attempt was to rebuild the native Ubuntu 10.04 package with the --disable-openmp flag added. This failed for some reason; I am not sure why, but the package kept coming out with OpenMP still active. My current solution is instead to backport ImageMagick from Ubuntu 10.10. Below are the steps I took:
Step 1 download the following files:
imagemagick_6.6.2.6-1ubuntu1.1.dsc
imagemagick_6.6.2.6.orig.tar.bz2
imagemagick_6.6.2.6-1ubuntu1.1.debian.tar.bz2
from here
Step 2 unpack the package
$ dpkg-source -x imagemagick_6.6.2.6-1ubuntu1.1.dsc
Step 3 edit the rules
$ cd imagemagick-6.6.2.6
$ vim debian/rules
Add the following line to the ./configure statement on lines 25-39. I added mine on line 34.
34: --disable-openmp \
Step 4 add dependencies and build (I needed these dependencies)
$ sudo apt-get install liblqr-1-0-dev librsvg2-dev
$ dpkg-buildpackage -b
Step 5 Out with the old, in with the new
$ sudo apt-get remove --purge imagemagick
$ sudo dpkg -i libmagickcore3_6.6.2.6-1ubuntu1.1_amd64.deb
$ sudo dpkg -i libmagickwand3_6.6.2.6-1ubuntu1.1_amd64.deb
$ sudo dpkg -i imagemagick_6.6.2.6-1ubuntu1.1_amd64.deb
Step 6 Once again have fast image conversions
Before (with OpenMP):
$ time utilities/convert 'image.jpg' -resize "x60" -crop "60x60+10+0" +repage 'thumb'
real 0m11.602s
user 0m11.414s
sys 0m0.069s
After:
$ time utilities/convert 'image.jpg' -resize "x60" -crop "60x60+10+0" +repage 'thumb'
real 0m0.077s
user 0m0.058s
sys 0m0.019s
If processing takes a long time, consider generating the thumbnails in a separate worker:
Request: accept file; save it to disk; post job to queue
Worker: pop job from queue; create thumbnails; repeat
Delayed::Job and Resque are great solutions for this.
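A minimal sketch of that split using Delayed::Job, reusing the gallery example from the question (the skip-inline-processing flag and method names are illustrative, not Paperclip defaults):
class Image < ActiveRecord::Base
  belongs_to :gallery
  has_attached_file :local, :styles => {:thumb => "100x100#", :small => "60x60#"}

  attr_accessor :processing                 # in-memory flag, illustrative
  # Returning false from this Paperclip callback skips ImageMagick entirely,
  # so the web request only stores the original file.
  before_local_post_process :process_now?

  def generate_styles!
    self.processing = true                  # allow post-processing this time
    local.reprocess!                        # Paperclip re-runs all styles
  end

  private

  def process_now?
    !!processing                            # false during the upload request
  end
end
In add_image you would then call image.delay.generate_styles! right after image.save, and a Delayed::Job worker does the resizing. Until the job runs, the style URLs point at files that do not exist yet, so the UI needs a placeholder; the delayed_paperclip gem packages this whole pattern if you would rather not hand-roll it.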