I have an application where I upload images to Rackspace using the paperclip and paperclip-cloudfiles gems.
This upload takes about a minute when I run the app on localhost.
On Heroku, it gives an application error.
The code is built following this guide:
http://blog.joshsoftware.com/2010/04/16/using-rackspace-cloudfiles-with-paperclip/
I got the following error on Heroku:
2013-03-22T14:49:02+00:00 heroku[router]: at=error code=H13 desc="Connection closed without response" method=POST path=/en/people/dC95vKJ7mr4OadeJe5kdxp/update_avatar host=foodswap.herokuapp.com fwd="14.97.68.176" dyno=web.1 queue=0ms wait=0ms connect=1ms service=30950ms status=503 bytes=0
2013-03-22T14:49:02+00:00 app[web.1]: Disconnected from ActiveRecord
2013-03-22T14:49:02+00:00 app[web.1]: reaped # worker=0
2013-03-22T14:49:03+00:00 app[web.1]: Connected to ActiveRecord
2013-03-22T14:49:03+00:00 app[web.1]: worker=0 ready
Using:
ruby 1.9.3
rails 3.0.19
paperclip 3.4.1
paperclip-cloudfiles 2.3.8.3
So how can I reduce the image processing time, or extend the server time limit?
Or would delayed_job help me upload the image? If so, how?
Heroku times out requests that take longer than 30 seconds, which is always a problem with uploads.
With Amazon S3 you can upload directly from the browser without going through Heroku, and have it pass a response to your app once the upload has completed, thereby bypassing Heroku's timeout entirely. You would need to check whether Rackspace offers similar functionality.
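As for delayed_job: yes, it can help, by returning the HTTP response immediately and leaving the slow Rackspace push to a worker. A minimal sketch follows, assuming delayed_job is already set up; the job class, file staging, and param names are hypothetical. One big caveat: on Heroku a separate worker dyno does not share the web dyno's filesystem, so the staged file would have to live somewhere both processes can reach, which is exactly why the direct-upload approach above is usually preferred.

require "fileutils"
require "tmpdir"

# Hypothetical Delayed::Job payload: performs the Paperclip assignment
# (and therefore the slow Rackspace upload) outside the request cycle.
class AvatarUploadJob < Struct.new(:person_id, :tmp_path)
  def perform
    person = Person.find(person_id)
    file = File.open(tmp_path)
    person.avatar = file   # Paperclip processes and uploads on save
    person.save!
    file.close
  ensure
    File.delete(tmp_path) if File.exist?(tmp_path)
  end
end

# In the controller action, instead of processing inline
# (@person and the param names are guesses based on the question):
upload   = params[:person][:avatar]
tmp_path = File.join(Dir.tmpdir, "avatar-#{@person.id}")
FileUtils.cp(upload.tempfile.path, tmp_path)  # stage the file for the worker
Delayed::Job.enqueue AvatarUploadJob.new(@person.id, tmp_path)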
I am running Logstash on Kubernetes, using the s3 input plugin to read logs from S3 and send them to Elasticsearch. The pod is in CrashLoopBackOff with the error below. Can someone help me with this issue?
error:
[2020-05-06T05:20:53,995][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>42, "name"=>"[main]<s3", "current_call"=>"uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/net/protocol.rb:181:in `wait_readable'"}]
{"thread_id"=>41, "name"=>"[main]>worker7", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:262:in `block in start_workers'"}]}}
[2020-05-06T05:20:48,651][ERROR][org.logstash.execution.ShutdownWatcherExt]
Here are the details of my setup:
GitLab version: 5.2
Operating system: CentOS 6.3
I am importing an existing repository while creating a new project (/projects/new).
A new EMPTY project is created; however, no repository is imported and no error message comes up. While digging around, I found this in my puma.stderr.log:
=== puma startup: 2013-06-13 15:37:53 +0530 ===
error: Couldn't resolve host 'github.com' while accessing https://github.com/<username>/tester.git/info/refs
This GitLab installation is behind an HTTP proxy server, and it seems the proxy settings aren't being picked up. However, /etc/profile.d/ has a script that sets the proxy system-wide via the http_proxy and https_proxy variables.
On further investigation, I checked whether the problem could be gitlab-shell being unable to reach the URL via the proxy, so I tried the following:
$ ./bin/gitlab-projects import-project xxx/tester_test_test.git https://github.com/<username>/tester.git
This seems to work perfectly.
It seems Puma does not go through gitlab-shell and is trying to reach the URL itself. Which leads me to the question: how do I tell Puma about my proxy server?
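My current hunch is that daemons launched from an init script never source /etc/profile.d, so Puma never sees those variables even though my login shell does. One thing I am considering (a guess, not a confirmed fix) is exporting them directly in whatever script starts Puma, e.g. near the top of the GitLab init script; the proxy address here is a placeholder:

# in the script that launches Puma; replace with your real proxy
export http_proxy=http://proxy.example.com:8080
export https_proxy=http://proxy.example.com:8080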
Following is my production.log:
Started GET "/favicon.ico" for 170.95.35.204 at 2013-06-13 16:20:00 +0530
Processing by ProjectsController#show as HTML
Parameters: {"id"=>"favicon.ico"}
Rendered public/404.html (0.0ms)
Filter chain halted as :project rendered or redirected
Completed 404 Not Found in 5ms (Views: 0.7ms | ActiveRecord: 0.7ms)
Started GET "/projects/new" for 170.95.35.204 at 2013-06-13 16:20:04 +0530
Processing by ProjectsController#new as HTML
Rendered projects/_errors.html.haml (0.1ms)
Rendered projects/new.html.haml within layouts/application (4.1ms)
Rendered layouts/_head.html.haml (0.7ms)
Rendered layouts/_search.html.haml (57.5ms)
Rendered layouts/_head_panel.html.haml (60.4ms)
Rendered layouts/_flash.html.haml (0.1ms)
Rendered layouts/nav/_dashboard.html.haml (3.0ms)
Completed 200 OK in 72ms (Views: 65.8ms | ActiveRecord: 4.0ms)
When encountering GitLab import problems, this is a general workaround:
1. Create a bare project in GitLab.
2. Add it as a remote in an existing cloned [and up to date] repo. You can git fetch from your current origin to bring it up to date.
git remote add mygitlab [bare project URL from #1]
3. Push the existing repo state to the new remote:
git push mygitlab --all
You may want to check some of the various recipes here if you have a problem getting everything to push: Push local Git repo to new remote including all branches and tags
I myself just edit .git/config by hand: take the mygitlab URL and make it the origin URL. After that you can just delete the mygitlab block by hand, or via the command line:
git remote remove mygitlab
This is easier than banging rocks together trying to get GitLab to import something it doesn't want to [for me, it's Bitbucket right now].
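Putting the whole workaround together as one command sequence (the remote URL is a placeholder for your bare project's URL; git remote set-url is just the command-line equivalent of my hand-editing of .git/config):

git remote add mygitlab git@gitlab.example.com:group/project.git
git fetch origin                  # bring the clone up to date first
git push mygitlab --all           # all branches
git push mygitlab --tags          # tags too, per the link above
git remote set-url origin git@gitlab.example.com:group/project.git
git remote remove mygitlab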
I'm using a SQLite3 database in development and a PostgreSQL database in production (Heroku). I'm running into some issues that may or may not be directly related to the PG database, but I'd like to know what I need to be leery of and what differences there are between the two.
For instance, are there certain things (be it syntax or anything else) that don't work with a PG database but do with a SQLite3 one?
Does this block of errors from the log have anything to do with the PG database?
2012-12-30T20:27:15+00:00 heroku[router]: at=info method=POST path=/books host=fast-journey-7822.herokuapp.com fwd=71.7.18.2 dyno=web.1 queue=0 wait=7ms connect=8ms service=30ms status=500 bytes=643
2012-12-30T20:27:15+00:00 app[web.1]: Started POST "/books" for 71.7.18.2 at 2012-12-30 20:27:15 +0000
2012-12-30T20:27:15+00:00 app[web.1]: Processing by BooksController#create as HTML
2012-12-30T20:27:15+00:00 app[web.1]: Parameters: {"utf8"=>"✓", "authenticity_token"=>"yXWQ/0j0AbCJ8Ytw3p7kvL0qgYFe0LTfSevhLChzk94=", "book"=>{"user_id"=>"1", "status"=>"f", "queued"=>"f", "title"=>"", "author"=>""}, "commit"=>""}
2012-12-30T20:27:15+00:00 app[web.1]: Completed 500 Internal Server Error in 1ms
If you run the same database in all environments, then this whole class of issues becomes irrelevant.
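For what it's worth, one well-known example of a query that behaves differently (not necessarily the one biting you here): LIKE is case-insensitive in SQLite but case-sensitive in PostgreSQL, so a search can match rows in development and nothing in production. Using your Book model from the log above:

Book.where("title LIKE ?", "%ruby%")        # matches "Ruby on Rails" in SQLite, not in PG
Book.where("LOWER(title) LIKE ?", "%ruby%") # portable across both

And if you do want Postgres in development too, a minimal config/database.yml sketch (the pg gem must be in your Gemfile; the names here are placeholders):

development:
  adapter: postgresql
  encoding: unicode
  database: myapp_development
  pool: 5
  username: myapp
  password: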
I am getting a 404 error when trying to push my database to Heroku via Taps
(1.9.2#[app_name]_db) heroku db:push --app [app_name]
Loaded Taps v0.3.24
Auto-detected local database: sqlite://db/development.sqlite3
Warning: Data in the app '[app-name]' will be overwritten and will not be recoverable.
! WARNING: Destructive Action
! This command will affect the app: [app-name]
! To proceed, type "[app-name]" or re-run this command with --confirm [app-name]
> [app-name]
Sending schema
Schema: 0% | | ETA: --:--:--
Saving session to push_201209251425.dat..
!!! Caught Server Exception
HTTP CODE: 404
The db:push command used to work fine; then I made some changes to my database by rolling back the migrations, editing them, and re-migrating. Now I can deploy the app just fine, but the database will not push. I don't know whether this is related to editing the migrations or not.
The app works fine on my machine, and I wanted to eliminate any discrepancies between Heroku's copy and my own, so I created a new app and pushed to that. Same thing: the Heroku app works but will not receive db:push; it errors out with the same 404 above.
Is this a Heroku service temporarily down, or has changing my app caused the 404?
Edit: heroku logs do not show any error message
Heroku support was taking too long to respond, so I found a workaround that communicates with my EC2 instance directly by using the Taps gem.
Go to the Heroku dashboard for your database. For me this was at
https://postgres.heroku.com/databases/[my-database-name]
though I navigated there by going through Add-ons.
Click on 'URL' in 'Connection Settings'; it should give you something like
postgres://[username]:[password]@ec2-[ip_address_numbers].compute-1.amazonaws.com:[port]/[database_name]
Copy this value down; I'll reference it here as [EC2_URL].
Get Taps installed on a 1.9.2 gemset if you don't already have it (not sure whether 1.9.3 will work; I didn't test it).
Set up a local Taps server to facilitate the transfer by running this in a terminal:
taps server postgres://[local_machine_username]@localhost/[name_from_database.yml] [some_username] [some_password]
(note the spaces before username and password)
Then you can run the transfer yourself from another terminal window:
taps pull [EC2_URL] http://[some_username]:[some_password]@localhost:5000
It should run and pull all your data from the local development db up to the Amazon instance. You can also go the other direction (see the sketch below), or choose a different database, etc. Or not, I'm not a cop.
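For the reverse direction, taps also has a push subcommand; with the same local server still running, something like this (an untested sketch, same placeholders as above) should copy the Amazon data back into the local dev db:

taps push [EC2_URL] http://[some_username]:[some_password]@localhost:5000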
There are some problems with the heroku db commands under Ruby 1.9.2 (the version I have):
db:pull ends with "Unable to fetch tables information from"
db:push ends with "!!! Caught Server Exception HTTP CODE: 404"
There is a workaround for this problem: switch to Ruby 1.8.7 for a moment (I am using rvm for this), just to do the db operations on Heroku, and switch back when finished.
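With rvm, that round trip looks roughly like this (assuming a 1.8.7 ruby is installed and has the heroku and taps gems):

rvm use 1.8.7
heroku db:push
rvm use 1.9.2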
I use the same process (having Heroku convert my SQLite database to Postgres), and I was getting this problem yesterday as well. It seems to be working now, so I believe it was an issue with Heroku.
I have built a web app at myapp.heroku.com, where "myapp" is actually the random name generated by Heroku. When I hit it with my web browser, it works. When I hit it with the Ruby rest-client gem (v1.6.3) as follows:
irb(main):024:0> response=RestClient.get "http://myapp.heroku.com"
It craps out with the following:
RestClient::InternalServerError: 500 Internal Server Error
from C:/Ruby192/lib/ruby/gems/1.9.1/gems/rest-client-1.6.3/lib/restclient/abstract_response.rb:48:in `return!'
from C:/Ruby192/lib/ruby/gems/1.9.1/gems/rest-client-1.6.3/lib/restclient/request.rb:228:in `process_result'
from C:/Ruby192/lib/ruby/gems/1.9.1/gems/rest-client-1.6.3/lib/restclient/request.rb:176:in `block in transmit'
from C:/Ruby192/lib/ruby/1.9.1/net/http.rb:627:in `start'
from C:/Ruby192/lib/ruby/gems/1.9.1/gems/rest-client-1.6.3/lib/restclient/request.rb:170:in `transmit'
from C:/Ruby192/lib/ruby/gems/1.9.1/gems/rest-client-1.6.3/lib/restclient/request.rb:64:in `execute'
from C:/Ruby192/lib/ruby/gems/1.9.1/gems/rest-client-1.6.3/lib/restclient/request.rb:33:in `execute'
from C:/Ruby192/lib/ruby/gems/1.9.1/gems/rest-client-1.6.3/lib/restclient.rb:68:in `get'
from (irb):24
from C:/Ruby192/bin/irb:12:in `<main>'
When I use the same client with better-known URLs, such as "http://www.google.com" or "http://www.heroku.com", it works fine: the content of the URL downloads and everything is good. When I use the same client with a version of the application running at "http://localhost:3000", it works fine too.
Am I missing something in my rest-client client which prevents it from GETting data from an app hosted at heroku.com?
============ EDIT ==== additional info ===========
After sleeping on it, I tried:
irb> require 'net/http'
irb> Net::HTTP.get_print URI.parse("http://myapp.heroku.com")
It worked fine.
There could be a number of reasons here:
If you are using a single dyno, it goes out to make the request, but when that request comes back to the site there isn't a dyno available to process it, so it times out.
There can also be situations where a server is unable to talk back to itself on its external address, but I don't think that is likely to be the case here; more likely a dyno issue, perhaps?
John.
I goofed.
Upon checking "heroku logs", I found an exception in my code which caused the 500 error. The exception was due to a type-casting difference between the PG database on Heroku and my local SQLite3 database. Once I fixed it, everything went well. I suspect the 500 error also caused my site to be temporarily inaccessible, which is why the subdomain on which my site is hosted did not return anything.
Moral of the story? rest-client works. Heroku works. Rest-client (IMO) shouldn't throw exceptions for a 500, but just return the HTTP response code and let the app deal with it. However, if I whine more, I'll also have to fork & fix...
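For anyone who wants that non-raising behaviour today: rest-client's failure exceptions carry the response, so you can rescue and inspect instead of crashing. A small sketch against the 1.6.x API (myapp is the placeholder app name from the question):

require 'rest_client'

begin
  response = RestClient.get "http://myapp.heroku.com"
rescue RestClient::ExceptionWithResponse => e
  puts e.http_code  # e.g. 500
  puts e.response   # the body the server actually sent back
end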