In any medium/high-concurrency Rails production environment you usually see that the log lines of different requests are interleaved. That is, consecutive log lines do not belong to the same request but to several different ones.
Is there any trick, gem, or Unix tool that can take a Rails log file and reorder it so that all the lines belonging to the same request appear consecutively?
I'm not looking for the log file to be sorted in real time; I want to sort an already closed Rails log file.
You can use the log_runes gem for this. It uses the Rails tagged logger to put a compact signature of the session id and request id on each log line so that it's easy to extract the log output for a session or a request with grep.
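For illustration, since log_runes builds on the Rails tagged logger, here is a minimal sketch of that underlying mechanism on its own (MyApp is a hypothetical application name; log_runes adds the compact session/request signatures on top of this):

# config/application.rb
module MyApp
  class Application < Rails::Application
    # Prefix every log line with the request UUID so that the lines of a
    # single request can be pulled out of an already closed log file.
    config.log_tags = [:request_id]
  end
end

Once every line carries such a tag, pulling one request out of a closed production.log is a plain grep for that request id.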
Can anyone help solve this assessment?
Using the JMeter framework (https://jmeter.apache.org), please implement a load test script:
The script should send 10 concurrent requests to the Capital API: https://restcountries.eu/rest/v2/capital/?fields=name;capital;currencies;latlng;regionalBlocs
The script should read the capital values from a CSV file (contains 10 capital names)
The script should perform a status code verification for the transaction response
The script should run for 2 minutes.
The script should contain at least 2 listeners.
My suggestion is: create the script on your own. It's the best way to learn any subject. Contributors to this forum will be more than happy to answer any specific questions if you get stuck in your quest.
The script should send 10 concurrent requests: concurrency is defined at the Thread Group level
Requests are configured using an HTTP Request sampler
Values can be read from the CSV file using a CSV Data Set Config (see the sketch after this list)
Status code verification is done more or less automatically by JMeter: it treats status codes below 400 as successful; additionally, you can use a Response Assertion for an explicit check
It is not recommended to use Listeners at all
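As a hedged sketch of just the CSV part (the file name, variable name, and the assumption that the capital is inserted into the request path are mine, not part of the assessment):

capitals.csv -- one capital name per line, for example:
London
Paris
Tokyo

CSV Data Set Config: Filename = capitals.csv, Variable Names = capital
HTTP Request path: /rest/v2/capital/${capital}?fields=name;capital;currencies;latlng;regionalBlocs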
We've implemented a SMAPI service and are attempting to serve up an audiobook. We can select the audiobook and start playback, but we run into issues when we want to move between chapters because our audio files are not split by chapter. Each audiobook is divided into roughly equal-length parts, and we have information on which part and how far into the part each chapter starts.
So we've run into an issue: our getMetadata response gives back the chapters of the audiobook, because that's how we'd like a user to navigate the book, but our getMediaURI responses for each chapter give back URLs for the parts the audio files are divided into, and we seem to be unable to start at a specific position in those files.
Our first attempt to resolve the issue was to include positionInformation in our getMediaURI response. That would still leave us with an issue of ending a chapter at the appropriate place, but might allow us to start at the appropriate place. But according to the Sonos docs, you're not meant to include position information for individual audiobook chapters, and it seems to be ignored.
Our second thought, and possibly a better solution, was to use the httpHeaders section of the getMediaURI response to set a Range header for only the section of the file that corresponds to the chapter. But Sonos appears to have issues with us setting a Range header, and seems to either ignore our header or break when we try to play a chapter. We assume this is because Sonos is trying to set its own Range headers.
Our current thought is that we might be able to pass the media URLs through some sort of proxy, adjusting the Sonos Range header by adding an offset to the start and end values based on where the chapter starts in the audio file.
So right now we return <fileUrl> from getMediaURI and Sonos sends a request like this:
<fileUrl>
Range: bytes=100-200
Instead we would return <proxyUrl>?url=<urlEncodedFileUrl>&offset=3000 from getMediaURI. Sonos would send something like this:
<proxyUrl>?url=<urlEncodedFileUrl>&offset=3000
Range: bytes=100-200
And the proxy would redirect to something like this:
<fileUrl>
Range: bytes=3100-3200
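For concreteness, such a proxy could be sketched roughly like this (a minimal Rack sketch, purely to show the offset arithmetic; the url and offset parameters are the ones proposed above, and it forwards the shifted request itself rather than redirecting, since a redirect cannot change the Range header the player sends):

# range_proxy.ru -- hypothetical sketch; run with: rackup range_proxy.ru
# Expects requests like GET /?url=<urlEncodedFileUrl>&offset=3000 carrying the
# player's own Range header; it shifts that range by the offset and streams
# the corresponding bytes back from the real file URL.
require 'rack'
require 'net/http'
require 'uri'

class RangeProxy
  def call(env)
    req    = Rack::Request.new(env)
    target = URI(req.params.fetch('url'))
    offset = req.params.fetch('offset', '0').to_i

    headers = {}
    if (range = env['HTTP_RANGE']) && range =~ /bytes=(\d+)-(\d*)/
      from = Regexp.last_match(1).to_i + offset
      to   = Regexp.last_match(2).empty? ? '' : (Regexp.last_match(2).to_i + offset).to_s
      headers['Range'] = "bytes=#{from}-#{to}"
    end

    # Fetch the shifted range from the real file and pass the body through.
    # Content-Range / Content-Length handling is deliberately omitted here.
    upstream = Net::HTTP.start(target.host, target.port, use_ssl: target.scheme == 'https') do |http|
      http.get(target.request_uri, headers)
    end
    [upstream.code.to_i, { 'content-type' => upstream['Content-Type'].to_s }, [upstream.body.to_s]]
  end
end

run RangeProxy.new

Whether the player tolerates responses whose Content-Range does not match what it asked for is exactly the kind of thing we would still need to test; the sketch only illustrates the offset arithmetic.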
Has anyone else dealt with audio files that don't match up one-to-one with their chapters? How did you deal with it?
The simple answer is that Sonos players respect the duration of the file, not the duration expressed in the metadata. You can't get around this with positionInformation or Cloud Queues.
However, the note that you shouldn't use positionInformation for chapters in an audiobook seems incorrect, so I removed it. The Saving and Resuming documentation states that you should include it if a user is resuming listening. You could use this to start playback at a specific position in the audio file. Did you receive an error when you attempted to do this?
Note that you would not be able to stop playback within the file (for example, if a chapter ended before the file ended). The player would play the entire file before stopping. The metadata would also not change until the end of the file. So, for example, if the metadata for the file is "Chapter 2" and chapter 2 ends before the end of the file, the Sonos app would still display "Chapter 2" until the end of the file.
Also note that the reporting APIs have been deprecated. See Add Reporting for the new reporting endpoint that your service should host.
I'm a novice in the JMeter world and I'm trying to get graphs with only the data used in the test; I don't need JMeter's own metrics.
My test case consists of many sensors sending information to a central point, which has to process this info and send a response to a consumer.
The group of sensors is a group of threads where every single sensor has its own CSV data file. The consumer is an AMQP Consumer.
I would like to save the following in CSV files:
One file for the information sent by every sensor, with the timestamp (one file -> one sensor).
One file containing all of the consumer's responses.
So far, I have messed around with the Aggregate Report listener and with sample_variables declared in the user.properties file. That way, JMeter includes the variables declared in user.properties in every report.
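For reference, the property looks like this in user.properties (the variable names here are hypothetical):

sample_variables=sensorId,sensorResponse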
Does JMeter fit my needs?
You can precisely control what JMeter stores in the .jtl results file by amending the relevant Results File Configuration properties; for example, the following entries in the user.properties file will suppress all JMeter metrics and leave only the timestamps:
jmeter.save.saveservice.assertion_results_failure_message=false
jmeter.save.saveservice.data_type=false
jmeter.save.saveservice.label=false
jmeter.save.saveservice.response_code=false
jmeter.save.saveservice.response_message=false
jmeter.save.saveservice.successful=false
jmeter.save.saveservice.thread_name=false
jmeter.save.saveservice.time=false
jmeter.save.saveservice.assertions=false
jmeter.save.saveservice.latency=false
jmeter.save.saveservice.connect_time=false
jmeter.save.saveservice.bytes=false
jmeter.save.saveservice.sent_bytes=false
jmeter.save.saveservice.idle_time=false
jmeter.save.saveservice.print_field_names=false
jmeter.save.saveservice.thread_counts=false
The same can be done using the -J command-line option, like:
jmeter -Jjmeter.save.saveservice.assertion_results_failure_message=false -Jjmeter.save.saveservice.data_type=false -Jjmeter.save.saveservice.label=false -Jjmeter.save.saveservice.response_code=false -Jjmeter.save.saveservice.response_message=false -Jjmeter.save.saveservice.successful=false -Jjmeter.save.saveservice.thread_name=false -Jjmeter.save.saveservice.time=false -Jjmeter.save.saveservice.assertions=false -Jjmeter.save.saveservice.latency=false -Jjmeter.save.saveservice.connect_time=false -Jjmeter.save.saveservice.bytes=false -Jjmeter.save.saveservice.sent_bytes=false -Jjmeter.save.saveservice.idle_time=false -Jjmeter.save.saveservice.print_field_names=false -Jjmeter.save.saveservice.thread_counts=false -n -t test.jmx -l result.jtl
In order to create a separate results file per request you can use the Flexible File Writer listener, which allows storing arbitrary metrics. You will need to add a Flexible File Writer as a child of each Sampler whose response you would like to store. Flexible File Writer can be installed using the JMeter Plugins Manager.
As Dmitri T said, it is not possible to create charts for custom data in the current JMeter version.
This is my WCF service, where a user can fetch the messages meant for them.
Simple:
[OperationContract]
[WebGet(UriTemplate = "/GetMessages/{UserGLKNumber}/{UserPassword}/{SessionToken}")]
Messages GetMessages(string SessionToken, string UserPassword, string UserGLKNumber);
I have concerns about this part of the line: {UserGLKNumber}/{UserPassword}/{SessionToken}
I have to authenticate the user before they get the messages, but with the GET method I cannot send objects in the body the way I can with POST.
Is this consistent with the REST pattern?
Please clear up my doubts.
There are already posts and questions about this; I am summarizing them here.
The POST verb is used when you are creating a new resource (a file in your case) and repeated operations would create multiple resources on the server. This verb makes sense if uploading a file with the same name multiple times creates multiple files on the server.
The PUT verb is used when you are updating an existing resource or creating a new resource with a predefined id. Repeating the operation recreates or updates the same resource on the server. This verb makes sense if uploading a file with the same name a second or third time overwrites the previously uploaded file.
Use POST every time you are modifying some state on the server, such as a database update or delete. Use GET for read-only fetching, such as a database select.
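To make the difference concrete, a hypothetical illustration (the paths are made up, just to show the idempotency point):

POST /files            -> creates /files/101; repeating the same request creates /files/102
PUT  /files/report.pdf -> creates /files/report.pdf; repeating the same request overwrites it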
GET: Get a collection of entries (as a feed document) or a single entry (as an entry document).
POST: Create a new entry from an entry document.
PUT: Update an existing entry with an entry document.
DELETE: Remove an entry.
Source: Difference between PUT and POST using WCF REST
Other useful reads are:
What's the difference between a POST and a PUT HTTP REQUEST?
http://www.codeproject.com/Articles/105273/Create-RESTful-WCF-Service-API-Step-By-Step-Guide
http://msdn.microsoft.com/en-us/magazine/dd315413.aspx
http://social.msdn.microsoft.com/Forums/vstudio/en-US/643e0d8b-80bb-45eb-8a84-318ac8de4497/difference-between-the-rest-verbs-put-and-post?forum=wcf
In terms of RESTful services...
POST:
1. It is somewhat more secure to use in an application than GET, since the parameters are not exposed in the URL.
2. Its data is not stored by proxy servers.
3. It can send large amounts of data; the size is restricted only by the web server's configuration.
4. It is not cached by the browser.
5. It takes its input as XML in the request body.
GET:
1. It is less secure to use in an application than POST, since the parameters are visible in the URL.
2. Its requests can be cached and logged by proxy servers.
3. It uses URL encoding for its parameters (the query string).
4. It is cached by the browser.
5. It is the default if you do not declare a verb.
6. It takes its input as a string and returns a formatted output.
I have a script which takes POST data from an external application and then processes it; this works fine when the POST data is small (i.e. 1-2 MB).
We are now in a situation where much larger (40-50 MB) data is being sent up.
The data that is coming up is very basic: it has a username, a password and the actual data to process.
With the larger files, the PHP script is only seeing the username and password. There is no data key.
The application that is sending the data claims it is sending the full file.
I've tried mod_dumpio but I'm not getting anything of any use (i.e. it doesn't seem to do anything different on POST requests).
The POST data was either being cut completely (in most cases) or partially truncated, caused by Suhosin. Increasing the post max value length and request max value length solved the problem.
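For reference, a minimal sketch of the relevant settings in php.ini / suhosin.ini (the values are examples only, sized to comfortably cover a 40-50 MB field):

; Raise Suhosin's per-value limits so a large POST field is no longer silently dropped
suhosin.post.max_value_length = 67108864
suhosin.request.max_value_length = 67108864

PHP's own post_max_size (and any limits enforced in front of PHP) must of course also be at least as large as the payload.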