How can I do this in SQL or with Arel/ActiveRecord, without Ruby?
Event schema:
integer "action" [0,1,2]
string "name"
float "price"
datetime "created_at"
datetime "updated_at"
Model method:
# interval is a duration in seconds, e.g. 2.hours, 1.hour, 15.minutes
def self.statistics(start_time, end_time, interval)
  Event.where(created_at: start_time..end_time).group_by do |e|
    end_time - ((end_time - e.created_at).to_i / interval.to_f).floor * interval
  end.map do |k, events|
    by_action = events.group_by(&:action)
    [k,
     {
       profit: events.reduce(0) { |sum, e| sum + e.price },
       actions: (0..2).map { |v| by_action.key?(v) ? by_action[v].count : 0 }
     }]
  end.to_h
end
Output:
{2015-10-12 11:00:00 +0300=>{:profit=>14685.0, :actions=>[86, 92, 105]},...}
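Not an Arel answer, but for reference, a sketch of how the grouping could be pushed into SQL, assuming PostgreSQL 9.4+ (for FILTER) and an interval expressed in seconds. Note the buckets here are anchored to the Unix epoch rather than to end_time as in the Ruby version, and the WHERE literals are placeholders:
SELECT
  to_timestamp(floor(extract(epoch FROM created_at) / 900) * 900) AS bucket, -- 900 s = 15 min
  sum(price) AS profit,
  count(*) FILTER (WHERE action = 0) AS action_0,
  count(*) FILTER (WHERE action = 1) AS action_1,
  count(*) FILTER (WHERE action = 2) AS action_2
FROM events
WHERE created_at BETWEEN '2015-10-12 00:00:00' AND '2015-10-12 23:59:59'
GROUP BY bucket
ORDER BY bucket;
The rows could then be read back with connection.select_all and reshaped into the hash above.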
Related
I am trying to assert the values inside a one-dimensional array. I have tried using match, but it looks like the date ranges cannot be asserted.
Below is the object array:
[
"2019-04-24T17:41:28",
"2019-04-24T17:41:27.975",
"2019-04-24T17:41:27.954",
"2019-04-24T17:41:27.93",
"2019-04-24T17:41:27.907",
"2019-04-24T17:41:27.886",
"2019-04-24T17:41:27.862",
"2019-04-24T17:41:27.84",
"2019-04-24T17:41:27.816",
"2019-04-24T17:41:27.792"
]
I am trying to assert that each value falls between the following dates:
MinDate:2019-04-24T17:25:00.000000+00:00
MaxDate:2019-04-24T17:50:00.000000+00:00
I have tried the following, but neither works:
* match dateCreated == '#[]? _.value >= fromDate'
* eval for(var i = 0; i < responseJson.response.data.TotalItemCount; i++) dateCreated.add(responseJson.response.data.Items[i].DateCreated) karate.assert(dateCreated[i] >= fromDate)
Any hint/tip on how to go about this would be appreciated.
Here you go:
* def dateToLong =
"""
function(s) {
var SimpleDateFormat = Java.type('java.text.SimpleDateFormat');
var sdf = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS");
return sdf.parse(s).time;
}
"""
* def min = dateToLong('2019-04-24T17:25:00.000')
* def max = dateToLong('2019-04-24T17:50:00.000')
* def isValid = function(x){ var temp = dateToLong(x); return temp >= min && temp <= max }
* def response =
"""
[
"2019-04-24T17:41:27.975",
"2019-04-24T17:41:27.954",
"2019-04-24T17:41:27.93",
"2019-04-24T17:41:27.907",
"2019-04-24T17:41:27.886",
"2019-04-24T17:41:27.862",
"2019-04-24T17:41:27.84",
"2019-04-24T17:41:27.816",
"2019-04-24T17:41:27.792"
]
"""
* match each response == '#? isValid(_)'
Please refer to the docs if you have doubts about any of the keywords. I removed the first date in the list because it was not consistent, but you have enough info to handle it if needed - you may need some conditional logic somewhere; a sketch of one option follows.
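For the value without millis, a hedged sketch of that conditional logic (assuming the only variation is the presence or absence of .SSS millis) could be a pattern switch inside dateToLong:
* def dateToLong =
"""
function(s) {
  var SimpleDateFormat = Java.type('java.text.SimpleDateFormat');
  // assumption: values either carry .SSS millis or no fractional part at all
  var pattern = s.indexOf('.') !== -1 ? "yyyy-MM-dd'T'HH:mm:ss.SSS" : "yyyy-MM-dd'T'HH:mm:ss";
  return new SimpleDateFormat(pattern).parse(s).time;
}
"""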
Also see:
https://stackoverflow.com/a/54114432/143475
https://stackoverflow.com/a/52892797/143475
Here is my table; the column states is jsonb.
Example of states (JSON):
[
{
"dt": "2020-12-23T16:15:18.405+00:00",
"id": "order.new",
"data": {}
}
]
The current date is 23 Dec 2020 and the current time is 20:26.
Now I want to show records where modified_at is more than 900 seconds (15 min) old and states->0->'id' = 'order.new'.
I tried this:
SELECT id, modified_at, states->0 as state FROM shop_order
WHERE (states ->0 #> '{"id":"order.new"}')
AND (extract(epoch from CURRENT_TIMESTAMP - modified_at)::integer > 900)
But the result is empty.
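Not certain without the data, but one hedged guess: #> is PostgreSQL's path-extraction operator (its right operand is a text[] path, not a JSON document), so the first condition does not perform a containment test; the containment operator is @>. A sketch of the corrected query:
SELECT id, modified_at, states->0 AS state FROM shop_order
WHERE (states->0 @> '{"id": "order.new"}'::jsonb)
AND (extract(epoch from CURRENT_TIMESTAMP - modified_at)::integer > 900)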
I need to mark rows in a time series where the timestamps fall between given time-of-day blocks; e.g. when I have
values = ([ 'motorway' ] * 5000) + ([ 'link' ] * 300) + ([ 'motorway' ] * 7000)
df = pd.DataFrame.from_dict({
'timestamp': pd.date_range(start='2018-1-1', end='2018-1-2', freq='s').tolist()[:len(values)],
'road_type': values,
})
df.set_index('timestamp', inplace=True)
I need to add a column rush that marks rows where timestamp is between 06:00 and 09:00 or 15:30 and 19:00. I've seen between_time but I don't know how to apply it here.
Edit: based on this answer I managed to put together
df['rush'] = (df.index.isin(df.between_time('00:00:15', '00:00:20', include_start=True, include_end=True).index)
              | df.index.isin(df.between_time('00:00:54', '00:00:59', include_start=True, include_end=True).index))
but I wonder whether there isn't a more elegant way.
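For reference, the same approach with the actual windows from the question would presumably look like this (a sketch; between_time includes both endpoints by default, so the include_* flags can be dropped):
df['rush'] = (df.index.isin(df.between_time('06:00', '09:00').index)
              | df.index.isin(df.between_time('15:30', '19:00').index))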
One alternative, using between:
from datetime import time as t
values = ([ 'motorway' ] * 5000) + ([ 'link' ] * 300) + ([ 'motorway' ] * 7000)
df = pd.DataFrame.from_dict({
'timestamp': pd.date_range(start='2018-1-1', end='2018-1-2',
freq='s').tolist()[:len(values)],
'road_type': values,
})
time = df['timestamp'].dt.time
df['rush'] = (time.between(t(0,6,0), t(0,9,0)) | time.between(t(0,15,30),t(0,19,0))).values
Or, slicing the df using datetime.time:
df = df.set_index(df.timestamp.dt.time)
df['rush'] = df.index.isin(df[t(0,6,0):t(0,9,0)].index | df[t(0,15,30):t(0,19,0)].index)
df = df.reset_index(drop=True)
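Another option, a hedged sketch that avoids the double isin by working on the question's original DatetimeIndex-indexed frame with DatetimeIndex.indexer_between_time:
import numpy as np

# mark rows whose time-of-day falls in either window
rush = np.zeros(len(df), dtype=bool)
rush[df.index.indexer_between_time('06:00', '09:00')] = True
rush[df.index.indexer_between_time('15:30', '19:00')] = True
df['rush'] = rush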
I have this method in my Rails API that keeps sending me High Response Time alerts. I have tried to optimize it as much as I could with my current knowledge, but apparently it's still not doing the job.
Any help on how to optimize these queries would be much appreciated:
This is my method to fetch markers and send them over to my API
First, I fetch the addresses:
longitude = params[:longitude]
latitude = params[:latitude]
@addresses = Address.joins('INNER JOIN users ON users.id = addresses.addressable_id')
.joins('INNER JOIN items ON items.user_id = users.id')
.where('items.name IS NOT NULL').where("items.name <> ''")
.where('items.visibility = TRUE')
.where('items.photo IS NOT NULL').where("items.photo <> ''")
.where('addresses.latitude IS NOT NULL AND addresses.addressable_type = ? ', "User")
.near([latitude, longitude], (params[:distance].to_i + 1000))
Second, I use these addresses to render a JSON object back to my API
I have a checkitem method:
def checkitem(item)
begin
requests = Request.where('item_id = ? AND created_at < ? AND created_at > ?', item.id, (DateTime.now - 1.day), (DateTime.now - 6.months)).pluck(:status)
if (requests.exists? && requests.count > 2)
if requests.count('pending') >= 3 && (item.user.current_sign_in_at.present? && item.user.current_sign_in_at < (DateTime.now - 2.weeks))
false
else
true
end
elsif (requests == [] || requests.count <= 2)
true
elsif (item.user.current_sign_in_at.present? && item.user.current_sign_in_at > (DateTime.now - 2.weeks)) || item.user.created_at > (DateTime.now - 2.weeks)
true
else
false
end
rescue
true
end
end
Then I render my JSON:
@places = Address.where(addressable_type: 'Item').where.not(type_add: nil).near([latitude, longitude], 10)
render json: {markers: @addresses.uniq.map { |address|
[{
name: address.user.items.first.name,
photo: { uri: address.user.items.first.photo.url },
id: Item.where(user_id: address.addressable_id).first.id,
latitude: address.latitude,
longitude: address.longitude,
breed: address.user.items.first.breed.id,
innactive: checkitem(address.user.items.first) ? false : true,
power: (address.user.items.first.requests.count >= 2 && address.user.items.first.requests.last(3).map(&:status).count('pending') < 1) ? true : false,
}]
}.reject { |e| e.nil? }.flatten.first(100)
}
end
@addresses.explain
=> EXPLAIN for: SELECT addresses.*, 3958.755864232 * 2 * ASIN(SQRT(POWER(SIN((45.501689 - addresses.latitude) * PI() / 180 / 2), 2) + COS(45.501689 * PI() / 180) * COS(addresses.latitude * PI() / 180) * POWER(SIN((-73.567256 - addresses.longitude) * PI() / 180 / 2), 2))) AS distance, MOD(CAST((ATAN2( ((addresses.longitude - -73.567256) / 57.2957795), ((addresses.latitude - 45.501689) / 57.2957795)) * 57.2957795) + 360 AS decimal), 360) AS bearing FROM "addresses" INNER JOIN users ON users.id = addresses.addressable_id INNER JOIN items ON items.user_id = users.id WHERE (items.name IS NOT NULL) AND (items.name <> '') AND (items.visibility = TRUE) AND (items.photo IS NOT NULL) AND (items.photo <> '') AND (addresses.latitude IS NOT NULL AND addresses.addressable_type = 'User' ) AND (addresses.latitude BETWEEN 31.028510688915205 AND 59.97486731108479 AND addresses.longitude BETWEEN -94.21702228070411 AND -52.91748971929589 AND (3958.755864232 * 2 * ASIN(SQRT(POWER(SIN((45.501689 - addresses.latitude) * PI() / 180 / 2), 2) + COS(45.501689 * PI() / 180) * COS(addresses.latitude * PI() / 180) * POWER(SIN((-73.567256 - addresses.longitude) * PI() / 180 / 2), 2)))) BETWEEN 0.0 AND 1000) ORDER BY distance ASC
QUERY PLAN
------------------------------------------------------------------
Sort (cost=224.28..224.28 rows=1 width=138)
Sort Key: (('7917.511728464'::double precision * asin(sqrt((power(sin((((('45.501689'::double precision - addresses.latitude) * '3.14159265358979'::double precision) / '180'::double precision) / '2'::double precision)), '2'::double precision) + (('0.70088823836273'::double precision * cos(((addresses.latitude * '3.14159265358979'::double precision) / '180'::double precision))) * power(sin((((('-73.567256'::double precision - addresses.longitude) * '3.14159265358979'::double precision) / '180'::double precision) / '2'::double precision)), '2'::double precision)))))))
-> Nested Loop (cost=0.11..224.28 rows=1 width=138)
-> Nested Loop (cost=0.06..207.10 rows=39 width=8)
-> Seq Scan on items (cost=0.00..126.62 rows=39 width=4)
Filter: ((name IS NOT NULL) AND visibility AND (photo IS NOT NULL) AND ((name)::text <> ''::text) AND ((photo)::text <> ''::text))
-> Index Only Scan using users_pkey on users (cost=0.06..2.06 rows=1 width=4)
Index Cond: (id = items.user_id)
-> Index Scan using index_addresses_on_addressable_type_and_addressable_id on addresses (cost=0.06..0.44 rows=1 width=98)
Index Cond: (((addressable_type)::text = 'User'::text) AND (addressable_id = users.id))
Filter: ((latitude IS NOT NULL) AND (latitude >= '31.0285106889152'::double precision) AND (latitude <= '59.9748673110848'::double precision) AND (longitude >= '-94.2170222807041'::double precision) AND (longitude <= '-52.9174897192959'::double precision) AND (('7917.511728464'::double precision * asin(sqrt((power(sin((((('45.501689'::double precision - latitude) * '3.14159265358979'::double precision) / '180'::double precision) / '2'::double precision)), '2'::double precision) + (('0.70088823836273'::double precision * cos(((latitude * '3.14159265358979'::double precision) / '180'::double precision))) * power(sin((((('-73.567256'::double precision - longitude) * '3.14159265358979'::double precision) / '180'::double precision) / '2'::double precision)), '2'::double precision)))))) >= '0'::double precision) AND (('7917.511728464'::double precision * asin(sqrt((power(sin((((('45.501689'::double precision - latitude) * '3.14159265358979'::double precision) / '180'::double precision) / '2'::double precision)), '2'::double precision) + (('0.70088823836273'::double precision * cos(((latitude * '3.14159265358979'::double precision) / '180'::double precision))) * power(sin((((('-73.567256'::double precision - longitude) * '3.14159265358979'::double precision) / '180'::double precision) / '2'::double precision)), '2'::double precision)))))) <= '1000'::double precision))
(11 rows)
This is not an easy question, and my answer is built on my assumptions and the code I can see. I am sure that with your feedback and cooperation we will get there :)
I think the first major issue is that you run a separate query against the requests table for each item_id, and this is definitely a bottleneck.
STEP1: You can improve your address-fetching code as follows:
@addresses = Address.joins("INNER JOIN users ON users.id = addresses.addressable_id AND addresses.addressable_type = 'User' INNER JOIN items ON items.user_id = users.id")
.where.not({
items: {
name: [nil, ''],
photo: [nil, ''],
visibility: false
},
addresses: { latitude: nil }
})
.near([latitude, longitude], (params[:distance].to_i + 1000))
.select('addresses.*, items.id AS item_id')
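One hedged caveat on the compound where.not: its behaviour with a multi-attribute hash changed around Rails 6.1 (from NOR to NAND), so on newer Rails the single call above would no longer mean "each of these negated". Splitting it keeps the intended semantics on any version:
.where.not(items: { name: [nil, ''] })
.where.not(items: { photo: [nil, ''] })
.where.not(items: { visibility: false })
.where.not(addresses: { latitude: nil })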
STEP2: Remove the @places = ... query. At least, I don't see anywhere you use it.
STEP3: Prevent (N + 1) queries by loading all the requests up front:
@requests = Request.where(item_id: @addresses.map(&:item_id).uniq).where('created_at < ? AND created_at > ?', (DateTime.now - 1.day), (DateTime.now - 6.months)).to_a
render json: {markers: @addresses.uniq.map { |address|
[{
name: address.user.items.first.name,
photo: { uri: address.user.items.first.photo.url },
id: Item.where(user_id: address.addressable_id).first.id,
latitude: address.latitude,
longitude: address.longitude,
breed: address.user.items.first.breed.id,
innactive: checkitem(address.user.items.first, @requests) ? false : true,
power: (address.user.items.first.requests.count >= 2 && address.user.items.first.requests.last(3).map(&:status).count('pending') < 1) ? true : false,
}]
}.reject { |e| e.nil? }.flatten.first(100)
}
end
STEP4: Remove the queries from checkitem and filter the preloaded requests in memory:
def checkitem(item, requests)
  statuses = requests.select { |r| r.item_id == item.id }.map(&:status)
  if statuses.count > 2
    if statuses.count('pending') >= 3 && item.user.current_sign_in_at.present? && item.user.current_sign_in_at < (DateTime.now - 2.weeks)
      false
    else
      true
    end
  elsif statuses.count <= 2
    true
  elsif (item.user.current_sign_in_at.present? && item.user.current_sign_in_at > (DateTime.now - 2.weeks)) || item.user.created_at > (DateTime.now - 2.weeks)
    true
  else
    false
  end
rescue
  true
end
This code still smells a lot, but let's take it as a first step and go from there. For additional changes I will need a few more pieces of code, but I really think this should remove the main bottleneck.
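As one more hedged follow-up (assuming Address defines a user association and User has_many :items): preloading those associations stops the render block from issuing a query per address for address.user.items.
# sketch: eager-load the associations the render block walks
@addresses = @addresses.preload(user: :items)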
I don't know anything about Crystal on Train Track, but if your issue is specifically caused by SQL queries that take too long to produce output, you can try these.
You join a users table with addresses, then you take the items table and join it to the previous result.
Then you filter these:
items.name should be NOT NULL
items.name should not be ''
items.photo should be NOT NULL
items.photo should not be ''
items.visibility = TRUE
addresses.latitude should not be NULL
and some more which cannot easily be avoided, I assume.
I am not sure about your design, but some of the above can be avoided. What I would do is create a VIEW: static conditions like the NOT NULLs would already be filtered and wouldn't need to be evaluated each time.
A view called Showable_Items, which is all items where:
items.name is NOT NULL
items.name is not ''
items.photo is NOT NULL
items.photo is not ''
items.visibility is TRUE
A view called Addressable_Addresses, which is all addresses where:
addresses.latitude is not NULL
Then join these two views, with their content ready to use; a hedged sketch of the definitions follows.
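A sketch of those two view definitions (names taken from the suggestion above):
CREATE VIEW showable_items AS
SELECT * FROM items
WHERE name IS NOT NULL AND name <> ''
  AND photo IS NOT NULL AND photo <> ''
  AND visibility = TRUE;

CREATE VIEW addressable_addresses AS
SELECT * FROM addresses
WHERE latitude IS NOT NULL;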
Also try:
What is your most critical filter parameter? The coordinates comparison possibly filters 99.9% of the table, so this table should also be divided, with views again: an ALL_ADDRESSES_VIEW restricted to, say, latitude between 10 and 15, or whatever makes sense for your design.
I am trying to use Geocoder.
I am trying to get results for the following query (I have this in my controller):
Event.near([params[:lat].to_f, params[:lng].to_f], params[:radius].to_f, unit: :km)
In my model I have:
geocoded_by :address, :latitude => :lat, :longitude => :lng
after_validation :geocode
but I get the following:
ActiveRecord::StatementInvalid: SQLite3::SQLException: no such column: lon: SELECT events.*, (69.09332411348201 * ABS(lat - -33.425329) * 0.7071067811865475) + (59.836573914187355 * ABS(lon - -70.604895) * 0.7071067811865475) AS distance, CASE WHEN (lat >= -33.425329 AND lon >= -70.604895) THEN 45.0 WHEN (lat < -33.425329 AND lon >= -70.604895) THEN 135.0 WHEN (lat < -33.425329 AND lon < -70.604895) THEN 225.0 WHEN (lat >= -33.425329 AND lon < -70.604895) THEN 315.0 END AS bearing FROM "events" WHERE (lat BETWEEN -33.714792566221696 AND -33.1358654337783 AND lon BETWEEN -70.95172225903696 AND -70.25806774096304) ORDER BY distance
Any help is appreciated
This always happens... we should start by doing this before asking here: restart the computer.
After rebooting the computer the problem was fixed... I don't know why it was necessary, though. Maybe something got stuck in memory and needed a hard reset.