I need to map which endpoints are taking the longest, based on a log.
I have a query that catches the slowest endpoints, but the results may contain the same endpoint more than once, each with a different request time.
My query:
fields request.url as URL, response.content.manager.company.label as Empresa, timestamp as Data, response.status as Status, request.time as TEMPO
| filter #logStream = 'production'
| sort request.time DESC
| limit 30
Result:
# | ENDPOINT | TIMESTAMP | COMPANY | STATUS CODE | TIME REQUEST
1 | /api/v1/login | 2020-02-01T11:14:00 | company-label | 200 | 0.9876
2 | /api/v1/register | 2020-02-01T11:11:00 | company-label | 200 | 0.5687
3 | /api/v1/login | 2020-02-01T00:00:00 | company-label | 200 | 0.2345
I need to unify by endpoint, for example:
# | ENDPOINT | TIMESTAMP | COMPANY | STATUS CODE | TIME REQUEST
1 | /api/v1/login | 2020-02-01T11:14:00 | company-label | 200 | 0.9876
2 | /api/v1/register | 2020-02-01T11:11:00 | company-label | 200 | 0.5687
Unify by endpoint and show only the latest "time" for each one.
Thank you!
I found the solution to this question.
filter #logStream = 'production'
| filter ispresent(request.time)
| stats avg(request.time) as MEDIA by request.uri as ENDPOINT
| sort MEDIA DESC
| limit 30
Using stats avg(request.time) as MEDIA groups the data and computes the average request time for each ENDPOINT.
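If you would rather keep the single slowest request per endpoint instead of the average, the same shape of query should work; this is only a sketch, assuming the max() stats function that Logs Insights provides (MAX_TEMPO is just an illustrative alias):
filter #logStream = 'production'
| filter ispresent(request.time)
| stats max(request.time) as MAX_TEMPO by request.uri as ENDPOINT
| sort MAX_TEMPO DESC
| limit 30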
Here is my script:
Scenario Outline: Test
* def body = {}
* set body
| path | value |
| name | <name> |
| metadata.<key> | <value> |
Given url 'http://localhost/'
* request body
When method post
Examples:
| name | key | value |
| 'John' | 'key1' | 'value1' |
| 'Jane' | | |
When I post the request I get the body as:
{"name": "John", "metadata": {"'key1'": "value1"}}
How do I get the metadata.key to be "key1"?
It is simpler than you think:
Scenario Outline: Test
* def body = { name: '#(name)' }
* body[key] = value
* print body
Examples:
| name | key | value |
| John | key1 | value1 |
| Jane | key2 | value2 |
Also refer: https://github.com/intuit/karate#scenario-outline-enhancements
EDIT: if you really have wildly different payloads in each row, I personally recommend you create a separate Scenario - in my opinion, trying to squeeze everything into a single super-generic-dynamic Scenario just leads to readability and maintainability issues, refer: https://stackoverflow.com/a/54126724/143475
That said, you can do this:
Scenario Outline: Test
* print body
Examples:
| body! |
| { "name": "John", "metadata": { "key1": "value1" } } |
| { "name": "Jane" } |
There are "smart" ways to remove some parts of a JSON like this: https://github.com/intuit/karate#remove-if-null - but you can choose which approach is simpler.
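For completeness, here is a minimal sketch of the remove-if-null idea referenced above, assuming the ## "optional" embedded-expression marker described in the Karate docs (the key is dropped when the expression evaluates to null):
Scenario Outline: Test
* def body = { name: '#(name)', metadata: '##(metadata)' }
* print body
Examples:
| name | metadata!          |
| John | { key1: 'value1' } |
| Jane | null               |
For the Jane row, body should end up as { "name": "Jane" } with no metadata key at all.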
The following is the Scenario Outline I have used. In the first and second rows display_name is empty, but display_name is still being sent in my request payload.
Scenario Outline: Negative cases
Given path '/api/check'
And request {name: <name> , description: <description> , display_name: <display_name>}
When method POST
Then status <status>
Examples:
| name    | description | display_name | status |
| ""      | "sasadss"   |              | 400    |
| "fddsd" | ""          |              | 400    |
| "ccs"   | ""          | "disp "      | 400    |
Unfortunately, Cucumber Examples tables send an empty string. You can use table as an alternative (a rough sketch follows the example link below), or you can put the whole JSON into a column; many teams do this.
| value |
| { some: 'json' } |
Refer to this example: https://github.com/intuit/karate/blob/master/karate-demo/src/test/java/demo/outline/examples.feature
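As a rough sketch of the table alternative - assuming the documented Karate behavior that blank cells simply leave the key out of the resulting JSON - something like this would avoid sending display_name when it is empty:
Scenario: Negative cases via table
* table rows
| name    | description | display_name |
| ''      | 'sasadss'   |              |
| 'fddsd' | ''          |              |
| 'ccs'   | ''          | 'disp '     |
# rows[0] and rows[1] should not contain a display_name key at all
* print rows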
In my Rails app, I'm trying to perform a query without ActiveRecord. Essentially what I want to do is select records whose created_at matches a given DateTime, then group the records by string type, then average their values. Note: I'm using PostgreSQL.
So for example, running the desired query on the Movie records below would yield something like:
{ Blah: 6, Hi: 2, Hello: 4}
id | value | event_id | user_id | created_at | updated_at | type
----+-------+----------+---------+----------------------------+----------------------------+-------------
1 | 1 | 1 | 1 | 2014-01-22 03:42:44.86269 | 2014-02-15 01:54:15.562552 | Blah
2 | 10 | 1 | 1 | 2014-01-22 03:42:44.86269 | 2014-02-15 01:54:15.574191 | Blah
3 | 1 | 1 | 1 | 2014-01-22 03:42:44.86269 | 2014-02-15 01:54:15.577179 | Hi
4 | 2 | 1 | 1 | 2014-01-22 03:42:44.86269 | 2014-02-15 01:54:15.578864 | Hi
5 | 7 | 1 | 1 | 2014-01-22 03:42:44.86269 | 2014-02-15 01:54:15.580517 | Hello
6 | 1 | 1 | 1 | 2014-01-22 03:42:44.86269 | 2014-02-15 01:54:15.58203 | Hello
(6 rows)
I think I can piece together the group by and average points, but I'm running into a wall trying to match records based on the created_at. I've tried:
select * from movies where 'movies.created_at' = '2014-01-22 03:42:44.86269'
And a few other variations where I try to call to_char, including:
select * FROM movies WHERE 'movies.created_at' = to_char('2014-01-22 03:42:44.86269'::TIMESTAMP, 'YYYY-MM-DD HH24:MI:SS');
The ActiveModel record for the first record in the above looks like this:
=> #<Movie id: 1, value: "1", event_id: 1, user_id: 1, created_at: "2014-01-22 03:42:44", updated_at: "2014-02-15 01:54:15", type: "Blah">
Its created_at, which is an ActiveSupport::TimeWithZone class looks like:
=> Wed, 22 Jan 2014 03:42:44 UTC +00:00
I imagine it has something to do with UTC but I can't figure it out. If anyone has any ideas I'd greatly appreciate it.
Single-quoted values are interpreted by Postgres as literal strings. So your first query is looking for records where the literal string movies.created_at is equal to the literal string 2014-01-22 03:42:44.86269 - none of which exist.
Identifiers in Postgres are quoted with double quotes; also note that a column reference with an explicit table qualifier (movies.created_at) is correctly quoted with the dot outside the quotes ("movies"."created_at") - if the dot is inside the quotes, it is interpreted as part of the column name.
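In other words, drop the single quotes around the column reference entirely, or double-quote the identifiers; a minimal corrected version of your first query would look something like:
select * from movies where movies.created_at = '2014-01-22 03:42:44.86269';
-- or, with quoted identifiers:
select * from "movies" where "movies"."created_at" = '2014-01-22 03:42:44.86269';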
You may want to keep the Postgres SQL reference handy in the future. :)
How can I use update_all, if I want to update a column of 300,000 records all with a variety of different values?
What I want to do is something like:
Model.update_all(:column => [2,33,94,32]).where(:id => [22974,22975,22976,22977])
But unfortunately this doesn't work, and it's even worse for 300,000 entries.
From the ActiveRecord#update documentation:
people = { 1 => { "first_name" => "David" }, 2 => { "first_name" => "Jeremy" } }
Person.update(people.keys, people.values)
So in your case:
updates = {22974 => {column: 2}, 22975 => {column: 33}, 22976 => {column: 94}, 22977 => {column: 32}}
Model.update(updates.keys, updates.values)
Edit: Just had a look at the source, and this is generating n SQL queries too... So probably not the best solution
The only way I found to do it is to generate an INSERT INTO request with the updated values. I'm using the gem "activerecord-import" for that.
For example,
I have a table with val values
+--------+--------------+---------+------------+-----+-------------------------+-------------------------+
| pkey | id | site_id | feature_id | val | created_at | updated_at |
+--------+--------------+---------+------------+-----+-------------------------+-------------------------+
| 1 | | 125 | 7 | 88 | 2016-01-27 10:25:45 UTC | 2016-02-05 11:18:14 UTC |
| 111765 | 0001-0000024 | 125 | 7 | 86 | 2016-01-27 11:33:22 UTC | 2016-02-05 11:18:14 UTC |
| 111766 | 0001-0000062 | 125 | 7 | 15 | 2016-01-27 11:33:22 UTC | 2016-02-05 11:18:14 UTC |
| 111767 | 0001-0000079 | 125 | 7 | 19 | 2016-01-27 11:33:22 UTC | 2016-02-05 11:18:14 UTC |
| 111768 | 0001-0000086 | 125 | 7 | 33 | 2016-01-27 11:33:22 UTC | 2016-02-05 11:18:14 UTC |
+--------+--------------+---------+------------+-----+-------------------------+-------------------------+
select records
products = CustomProduct.limit(5)
update records as you need
products.each_with_index{|p, i| p.val = i}
save records in a single request
CustomProduct.import products.to_a, :on_duplicate_key_update => [:val]
All your records will be updated in a single request. Please see the gem "activerecord-import" documentation for more details.
+--------+--------------+---------+------------+-----+-------------------------+-------------------------+
| pkey | id | site_id | feature_id | val | created_at | updated_at |
+--------+--------------+---------+------------+-----+-------------------------+-------------------------+
| 1 | | 125 | 7 | 0 | 2016-01-27 10:25:45 UTC | 2016-02-05 11:19:49 UTC |
| 111765 | 0001-0000024 | 125 | 7 | 1 | 2016-01-27 11:33:22 UTC | 2016-02-05 11:19:49 UTC |
| 111766 | 0001-0000062 | 125 | 7 | 2 | 2016-01-27 11:33:22 UTC | 2016-02-05 11:19:49 UTC |
| 111767 | 0001-0000079 | 125 | 7 | 3 | 2016-01-27 11:33:22 UTC | 2016-02-05 11:19:49 UTC |
| 111768 | 0001-0000086 | 125 | 7 | 4 | 2016-01-27 11:33:22 UTC | 2016-02-05 11:19:49 UTC |
+--------+--------------+---------+------------+-----+-------------------------+-------------------------+
The short answer to your question is: you can't.
The point of update_all is to assign the same value to the column for all records (matching the condition if provided). The reason that is useful is that it does it in a single SQL statement.
I agree with Shime's answer for correctness, although it will generate n SQL calls. So maybe there is something more to your problem that you're not telling us. Perhaps you can iterate over each possible value, calling update_all for the objects that should get updated with that value. Then it's a matter of either building the appropriate hash or, even better, if the condition is based on something in the model itself, passing the condition to update_all.
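As a rough sketch of that idea - assuming you can build a mapping from each target value to the ids that should receive it (the mapping below just reuses the example data from the question) - you end up with one update_all per distinct value rather than one query per record:
# hypothetical mapping: target value => ids that should receive it
value_to_ids = {
  2  => [22974],
  33 => [22975],
  94 => [22976],
  32 => [22977]
}

# one UPDATE statement per distinct value instead of one per record
value_to_ids.each do |value, ids|
  Model.where(id: ids).update_all(column: value)
end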
This is my 2020 answer:
The most upvoted answer is wrong; as the author himself states, it will trigger n SQL queries, one for each row.
The second most upvoted answer suggests the gem "activerecord-import", which is the way to go. However, it does so by instantiating ActiveRecord models, and if you are reaching for a gem like this, you're probably looking for extreme performance (as it was in our case).
So this is what we did. First, you build an array of hashes, each hash containing the id of the record you want to update and any other fields.
For instance:
records = [{ id: 1, name: 'Bob' }, { id: 2, name: 'Wilson' },...]
Then you invoke the gem like this:
YourModelName.import(records, on_duplicate_key_update: [:name, :other_columns_whose_keys_are_present_in_the_hash], validate: false, timestamps: false)
Explanation:
on_duplicate_key_update means that, if the database finds a collision on the primary key (and it will on every row, since we're talking about updating existing records), it will NOT fail, and will instead update the columns you pass in that array.
If you don't pass validate: false (the default is true), it will try to instantiate a new model instance for each row and will probably fail validation (since your hashes only contain partial information).
timestamps: false is also optional, but it's good to know it's there.