I am logging some errors by doing this in my Rails app:
logger.error "MY ERROR STRING"
How can I look in heroku logs to find that line?
I know how to do heroku logs --tail and heroku logs -n 500, but how can I find a specific string in the logs?
Using heroku logs -t | grep "term" is great for filtering current log events. However, Heroku retains a max of 1500 lines of log output, so even if you do heroku logs -n 1500 -t you won't see log events that occurred prior to that log window.
The better approach is to use a logging add-on like Papertrail that retains your logs for a greater period of time and makes your full log history searchable.
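For example, attaching Papertrail should be something along these lines (assuming the current Heroku CLI add-on syntax; plan names may differ):
heroku addons:create papertrail -a appname
After that, heroku addons:open papertrail should open the searchable log viewer.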
Of course, it's also possible that the log event you think is being triggered in your code isn't.
Try putting | grep "search term" after your heroku logs -n 500 command,
so something like: heroku logs -n 500 | grep "MY ERROR STRING"
For Windows users:
heroku logs -n 100 -a appname | find "wordToFind"
You can change 100 to any other number from 0 to 1500; Heroku keeps at most the last 1,500 log lines from your Heroku application.
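If you are in PowerShell rather than cmd.exe, Select-String plays the role of find (a sketch reusing the same flags as above):
heroku logs -n 1500 -a appname | Select-String "wordToFind"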
I'm currently struggling to get Graylog working over HTTPS in a Docker environment. I'm using jwilder/nginx-proxy and I have the certificates in place.
When I run:
docker run --name=graylog-prod \
  --link mongo-prod:mongo --link elastic-prod:elasticsearch \
  -e VIRTUAL_PORT=9000 -e VIRTUAL_HOST=test.myserver.com \
  -e GRAYLOG_WEB_ENDPOINT_URI="http://test.myserver.com/api" \
  -e GRAYLOG_PASSWORD_SECRET=somepasswordpepper \
  -e GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918 \
  -d graylog2/server
I get the following error:
We are experiencing problems connecting to the Graylog server running
on http://test.myserver.com:9000/api. Please verify that the server is
healthy and working correctly.
You will be automatically redirected to the previous page once we can
connect to the server.
This is the last response we received from the server:
Error message
    Bad request
Original Request
    GET http://test.myserver.com/api/system/sessions
Status code
    undefined
Full error message
    Error: Request has been terminated
    Possible causes: the network is offline, Origin is not allowed by Access-Control-Allow-Origin, the page is being unloaded, etc.
When I go to the URL in the message, I get a reply: {"session_id":null,"username":null,"is_valid":false}
This is the same reply I get when running Graylog without https.
Nothing related is mentioned in the Docker log of the Graylog container.
docker ps:
CONTAINER ID   IMAGE             COMMAND                  CREATED         STATUS         PORTS                 NAMES
56c9b3b4fc74   graylog2/server   "/docker-entrypoint.s"   5 minutes ago   Up 5 minutes   9000/tcp, 12900/tcp   graylog-prod
When running Docker with the option -p 9000:9000, everything works fine without HTTPS, but as soon as I force it to go over HTTPS I get this error.
Does anyone have an idea what I'm doing wrong here?
Thanks a lot!
Did you try GRAYLOG_WEB_ENDPOINT_URI="https://test.myserver.com/api" ?
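For reference, that would be your original docker run with only the endpoint URI changed to https (everything else kept as-is):
docker run --name=graylog-prod \
  --link mongo-prod:mongo --link elastic-prod:elasticsearch \
  -e VIRTUAL_PORT=9000 -e VIRTUAL_HOST=test.myserver.com \
  -e GRAYLOG_WEB_ENDPOINT_URI="https://test.myserver.com/api" \
  -e GRAYLOG_PASSWORD_SECRET=somepasswordpepper \
  -e GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918 \
  -d graylog2/server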
I am trying to join two tables in an Azure SQL DB from my local Ubuntu machine. One of the tables has around 300M rows, so the query takes a while to run. But whenever I run the query like this,
sqlcmd -S *server* -d *DB* -U *User* -P *Password*
-l 600 -t 600 -Q *Query* -s ',' -o *output_file*
-W -w 1000 -C -M
It gives the same error at different points whenever I run it.
This is the error I am getting,
Sqlcmd: Error: Internal error at ReadAndHandleColumnData (Reason: Error reading column data).
SSL Provider: [error:80001044:lib(128):func(1):internal error:unexpected error]
Communication link failure
At first, I thought it was a timeout issue, so I increased the query timeout and server timeout to 10 minutes. But it doesn't wait for 10 minutes; it throws the error before that. Can someone help, please?
I just got the same error message, but when I ran the query a second time, it worked...
¯\_(ツ)_/¯
Seems to be an internal SQL Server/sqlcmd issue over which users have very little control.
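If you want to automate the retry, a minimal sketch (with the question's placeholders turned into shell variables) is to wrap the sqlcmd call in a loop:
# retry up to 3 times, since the failure appears to be transient
for attempt in 1 2 3; do
    sqlcmd -S "$server" -d "$db" -U "$user" -P "$password" \
        -l 600 -t 600 -Q "$query" -s ',' -o "$output_file" \
        -W -w 1000 -C -M && break
    echo "Attempt $attempt failed, retrying in 30s..." >&2
    sleep 30
done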
Starting today (or possibly a few days ago), the SoftLayer_Billing_Item::cancelService API has stopped working for File Storage type NAS. It still works fine for block type storage.
Here is the output I get:
[chrisr@ratsy ~]$ curl -k -u chrisr1:xxxxxxxx https://api.softlayer.com/rest/v3/SoftLayer_Billing_Item/129162879/cancelService.json
{"error":"This cancellation could not be processed please contact support.This cancellation could not be processed please contact support. You cannot cancel the selected billing items immediately. Please choose your next billing anniversary date for cancellation date.","code":"SoftLayer_Exception_Public"}
cancelService was working previously. I also tried the SoftLayer_Billing_Item::cancelServiceOnAnniversaryDate API to see if it would work, but it didn't:
[chrisr@ratsy ~]$ curl -k -u chrisr1:xxxxxxxxxxxx https://api.softlayer.com/rest/v3/SoftLayer_Billing_Item/129162879/cancelServiceOnAnniversaryDate.json
{"error":"This type of service cannot be cancelled through this method. Please use cancelService()","code":"SoftLayer_Exception"}
Could you try this method instead:
SoftLayer_Billing_Item::cancelItem
e.g.:
curl -k -X POST -d '{"parameters":[true, true, "No longer needed", "Cancel"]}' -u $username:$apiKey https://api.softlayer.com/rest/v3/SoftLayer_Billing_Item/$billingItemId/cancelItem.json
Replace $username, $apiKey, and $billingItemId with your own values.
In our Gerrit instance (2.10), we're getting an intermittent error (in roughly 1 of 10 executions) when executing a review command:
bash-4.1$ ssh -p 12345 gerrit@gerrit.foo.int gerrit review --label Verified=0 --message '"Build started."' 2458,2
error: Cannot post review
Any suggestions what might be wrong?
Looking at the Gerrit source code, I can see this message is associated with RestApiException. Unfortunately, there is not a single record in the logs directory containing this exception or the "Cannot post review" error.
I'm not sure how to increase the log level, as the logging command does not seem to be available in this version (my assumption):
bash-4.1$ ssh -p 12345 gerrit@gerrit.foo.int gerrit logging set-level
gerrit: logging: not found
Any help would be appreciated.
This problem and its solution are described in
GWT ORM OrmConcurrencyException: Concurrent modification detected - find the cause
The real problem is that gerrit review is not resistant to simultaneous updates of a label (e.g., --label A= and --label B=).
This was also reported as https://code.google.com/p/gerrit/issues/detail?id=3730&thanks=3730&ts=1450711628.
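If you cannot avoid firing several gerrit review calls against the same change at once, one client-side workaround is to serialize them, e.g. with flock (a sketch; the lock file path is arbitrary, and this only serializes calls made from the same machine):
# only one review command runs at a time; later ones wait for the lock
flock /tmp/gerrit-review.lock \
    ssh -p 12345 gerrit@gerrit.foo.int gerrit review \
        --label Verified=0 --message '"Build started."' 2458,2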
I'm trying to import one million lines of redis commands, using the --pipe feature.
redis_version:2.8.1
cat file.txt | redis-cli --pipe
This results in the following error:
Error reading from the server: Connection reset by peer
Does anyone know what I'm doing wrong?
file.txt contains, for example,
lpush name joe
lpush name bob
Edit: I now see there's probably a special format for using pipe mode: http://redis.io/topics/protocol
The first point is that the parameters have to be double-quoted. The documentation is somewhat misleading on this point.
So a working syntax is:
lpush "name" "joe"
lpush "name" "bob"
The second point is that each line has to end with \r\n and not just \n. To fix that, you just have to convert your file with the unix2dos command,
like: unix2dos file.txt
Then you can import your file using cat file.txt | src/redis-cli --pipe
This worked for me.
To use pipe mode (a.k.a. bulk loading, or mass insertion) you must indeed provide your commands directly in the Redis protocol format.
The corresponding Redis protocol for LPUSH name joe is:
*3
$5
LPUSH
$4
name
$3
joe
Or as a quoted string: "*3\r\n$5\r\nLPUSH\r\n$4\r\nname\r\n$3\r\njoe\r\n".
This is what your input file must contain.
The Redis documentation includes a Ruby sample to help you generate the protocol: see gen_redis_proto.
A Python sample is available e.g. in the redis-tools package.
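If you prefer to stay on the shell, here is a minimal bash sketch of the same idea (it assumes ASCII arguments, since ${#arg} counts characters rather than bytes):
# emit one command in RESP format: *<argc>, then $<len> and the value for each argument
gen_redis_proto() {
    printf '*%d\r\n' "$#"
    local arg
    for arg in "$@"; do
        printf '$%d\r\n%s\r\n' "${#arg}" "$arg"
    done
}

# build the bulk-load file from plain commands, then feed it to --pipe
{
    gen_redis_proto LPUSH name joe
    gen_redis_proto LPUSH name bob
} > file.proto

cat file.proto | redis-cli --pipe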
There are existing tools that convert client commands directly to Redis wire-protocol messages. Example:
redis-mass my-client-script.txt | redis-cli --pipe
https://golanglibs.com/dig_in/redis-mass
https://github.com/almeida/redis-mass
There are two possible causes.
The first thing to check is whether you are exceeding the maxclients limit.
You can check this using the 'info clients' and 'config get maxclients' Redis commands.
On my desktop, the result is below.
127.0.0.1:6379> info clients
# Clients
connected_clients:2
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
127.0.0.1:6379> config get maxclients
1) "maxclients"
2) "2"
Then I tried to use the pipe command; the result is below.
[localhost redis-2.8.1]$ cat test.txt | ./src/redis-cli --pipe
All data transferred. Waiting for the last reply...
Error reading from the server: Connection reset by peer
If your result looks the same, you have to raise the limit in your redis.conf file.
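For example, something like this in redis.conf (the right number depends on your workload; a restart is needed for the file change to take effect):
# allow far more simultaneous connections than a tiny maxclients of 2
maxclients 10000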
The second thing to check is the ulimit setting.
Changing the ulimit requires root privileges; check the link below:
How do I change the number of open files limit in Linux?
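A typical sketch (exact values and limits depend on your system):
# raise the open-files limit for the current shell before starting redis-server
ulimit -n 65536
# or, for a permanent change, add lines like these to /etc/security/limits.conf:
#   redis  soft  nofile  65536
#   redis  hard  nofile  65536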
This error happens because the timeout set in Redis is the default, 0. You need to configure the timeout value via redis-cli using the commands below.
To connect to the Redis server:
redis-cli -h <host> -p <port> -a <password>
To view the configured timeout value:
config get timeout shows the timeout value currently configured on the Redis server.
To set a new value for the Redis timeout:
config set timeout 120 sets the timeout to 2 minutes. Set the Redis timeout as long as your execution needs.
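Putting it together, a quick session sketch (connection details are placeholders):
redis-cli -h <host> -p <port> -a <password>
127.0.0.1:6379> config get timeout
1) "timeout"
2) "0"
127.0.0.1:6379> config set timeout 120
OK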
I hope this answer helps you. Cyu!!!
You can use the following command to import your file's data into Redis:
cat file.txt | xargs -L1 redis-cli