I'm using https://www.awql.me to build requests, and the first one below works; I'm able to retrieve all campaigns with data from the past 7 days:
SELECT CampaignId, CampaignName, Clicks, Impressions
FROM CAMPAIGN_PERFORMANCE_REPORT
DURING LAST_7_DAYS
But when I try to add CampaignStatus and/or ORDER BY and/or LIMIT, I get the following error message:
Underlying errors are
Type = 'QueryError.LIMIT_CLAUSE_NOT_SUPPORTED', Trigger = '', FieldPath = ''
Below is the request that causes the issue (I also tried using CampaignStatus, ORDER BY, and LIMIT separately, but the same error occurred):
SELECT CampaignId, CampaignName, Clicks, Impressions
FROM CAMPAIGN_PERFORMANCE_REPORT
WHERE CampaignStatus = 'Enabled'
DURING LAST_7_DAYS
ORDER BY Clicks DESC
LIMIT 0,5
I read that it's not possible to use ORDER BY and LIMIT with CAMPAIGN_PERFORMANCE_REPORT, so how do you get around this limitation to retrieve formatted data in the response at the campaign level?
Did you find a way to make the status filter work in your AWQL request?
Thanks a lot!
The problem with your CampaignStatus filter is that the status value should be ENABLED instead of Enabled.
As for LIMIT and ORDER BY, these are indeed not supported in AWQL. You'll have to process the data on your end.
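For reference, here is a version of the request that should pass validation, keeping only the clauses AWQL accepts; the sort by clicks and the top-5 cut then have to happen in your own code once you have parsed the report:
SELECT CampaignId, CampaignName, Clicks, Impressions
FROM CAMPAIGN_PERFORMANCE_REPORT
WHERE CampaignStatus = 'ENABLED'
DURING LAST_7_DAYS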
I have a table of session events. Each session can have 3 different types of events.
Some sessions have only error-type events, and I need to identify them by getting a list of those sessions.
I tried the following code:
SELECT
test.SessionId, SS.RequestId
FROM
(SELECT DISTINCT
SSE.SessionId,
SSE.type,
COUNT(SSE.SessionId) OVER (ORDER BY SSE.SessionId, SSE.type) AS total_XSESIONID_TYPE,
COUNT(SSE.SessionId) OVER (ORDER BY SSE.SessionId) AS total_XSESIONID
FROM
[CMstg].SessionEvents SSE
-- WHERE SSE.SessionId IN ('fa3ed523-60f9-4af0-a85f-1dec9e9d2cdb' )
) AS test
WHERE
test.total_XSESIONID_TYPE = test.total_XSESIONID
AND test.type = 'Errors'
-- AND test.SessionId IN ('fa3ed523-60f9-4af0-a85f-1dec9e9d2cdb' )
Each session can have more than one type, and I need to count only the sessions whose only type is 'Errors'; I don't want to include sessions that have additional event types in the count.
When I run the inner query on its own I get a count of 3 error events per session, but when I run the whole procedure the number is multiplied to 90?
Sample table:
sessionID                             | type
--------------------------------------|---------
fa3ed523-60f9-4af0-a85f-1dec9e9d2cdb  | Errors
fa3ed523-60f9-4af0-a85f-1dec9e9d2cdb  | Errors
fa3ed523-60f9-4af0-a85f-1dec9e9d2cdb  | Errors
00c896a0-dccc-41bf-8dff-a5cd6856bb76  | NonError
00c896a0-dccc-41bf-8dff-a5cd6856bb76  | Errors
00c896a0-dccc-41bf-8dff-a5cd6856bb76  | Errors
00c896a0-dccc-41bf-8dff-a5cd6856bb76  | Errors
In this case I should get
sessionid = fa3ed523-60f9-4af0-a85f-1dec9e9d2cdb
Please advise. I hope this is clearer now, thanks!
It's been a long time, but I think something like this should get you the desired results:
SELECT SessionId
FROM <TableName> -- replace with actual table name
GROUP BY SessionId
HAVING COUNT(*) = COUNT(CASE WHEN type = 'Errors' THEN 1 END)
COUNT(*) counts every event in the session, while the conditional COUNT only counts the 'Errors' rows, so the two are equal only for sessions that contain nothing but errors.
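If the conditional aggregate reads awkwardly, an equivalent anti-join formulation should give the same list (a sketch against the [CMstg].SessionEvents table from the question):
SELECT DISTINCT s.SessionId
FROM [CMstg].SessionEvents s
WHERE NOT EXISTS (SELECT 1
                  FROM [CMstg].SessionEvents o
                  WHERE o.SessionId = s.SessionId
                    AND o.type <> 'Errors')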
And a pro tip: When asking sql-server questions, it's best to follow these guidelines
SELECT *
FROM NameOfDataBase
WHERE type != 'errors'
Is this what you wanted to do?
When I run the following query, my Netezza NPS reboots. Would someone please let me know what is causing this behaviour?
select avg(bse.WEEKS_BETWEEN_RESPONSES_HR) as g_AVG
     , sqlext.median(bse.WEEKS_BETWEEN_RESPONSES_HR) as g_med
from (
    select WEEKS_BETWEEN_RESPONSES_HR
    from (
        select distinct LOYALTY_ACCOUNT_CARD_ID
             , BONUS_END_DATE
             , LAG(BONUS_END_DATE, 1) OVER (partition by LOYALTY_ACCOUNT_CARD_ID order by BONUS_END_DATE) as PRIOR_BONUS_END_DATE
             , ((BONUS_END_DATE - PRIOR_BONUS_END_DATE) / 7) as WEEKS_BETWEEN_RESPONSES_HR
        from JO_ACT_PTD_STEP_1 bse
        where upper(bonus_desc) like '%SPEND%'
          and redemption = 1
    ) BSE
    where WEEKS_BETWEEN_RESPONSES_HR is not null
      and WEEKS_BETWEEN_RESPONSES_HR > 0
) bse
limit 500
You need to call the support people at IBM; there is probably a stack trace or a dump file somewhere that will tell them what happened.
If I were experiencing your problem, I would remove each of the function calls one by one, making the SQL simpler and simpler until the error disappeared. But of course you will need to do that in the middle of the night, or at a time when nobody else is being bothered by the constant reboots.
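As a sketch of that bisection, a first cut might drop the sqlext.median() call, since a UDF from an extension package is a reasonable first suspect (this only illustrates the approach, it is not a known fix):
select avg(bse.WEEKS_BETWEEN_RESPONSES_HR) as g_AVG
from (
    select WEEKS_BETWEEN_RESPONSES_HR
    from (
        select distinct LOYALTY_ACCOUNT_CARD_ID
             , BONUS_END_DATE
             , LAG(BONUS_END_DATE, 1) OVER (partition by LOYALTY_ACCOUNT_CARD_ID order by BONUS_END_DATE) as PRIOR_BONUS_END_DATE
             , ((BONUS_END_DATE - PRIOR_BONUS_END_DATE) / 7) as WEEKS_BETWEEN_RESPONSES_HR
        from JO_ACT_PTD_STEP_1 bse
        where upper(bonus_desc) like '%SPEND%'
          and redemption = 1
    ) BSE
    where WEEKS_BETWEEN_RESPONSES_HR is not null
      and WEEKS_BETWEEN_RESPONSES_HR > 0
) bse
limit 500
If that still reboots the box, keep halving: drop the LAG() next, then the DISTINCT, until the failing ingredient is isolated.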
I have a Magento site with a few extensions. The main one is a gift card extension for unique codes. We are running a promotion right now with 800K codes, so it is generating huge traffic. The problem is that it is now creating ghost orders: after taking payment, at the last moment when the order must be promoted from reserved to confirmed, it shows a table lock error.
The exact error is:
SQLSTATE[HY000]: General error: 1205 Lock wait timeout exceeded; try restarting transaction, query was: INSERT INTO sales_flat_order_grid (entity_id, status, store_id, customer_id, base_grand_total, base_total_paid, grand_total, total_paid, increment_id, base_currency_code, order_currency_code, store_name, created_at, updated_at, billing_name, shipping_name) SELECT main_table.entity_id, main_table.status, main_table.store_id, main_table.customer_id, main_table.base_grand_total, main_table.base_total_paid, main_table.grand_total, main_table.total_paid, main_table.increment_id, main_table.base_currency_code, main_table.order_currency_code, main_table.store_name, main_table.created_at, main_table.updated_at, CONCAT(IFNULL(table_billing_name.firstname, ''), ' ', IFNULL(table_billing_name.middlename, ''), ' ', IFNULL(table_billing_name.lastname, '')) AS billing_name, CONCAT(IFNULL(table_shipping_name.firstname, ''), ' ', IFNULL(table_shipping_name.middlename, ''), ' ', IFNULL(table_shipping_name.lastname, '')) AS shipping_name FROM sales_flat_order AS main_table LEFT JOIN sales_flat_order_address AS table_billing_name ON main_table.billing_address_id=table_billing_name.entity_id LEFT JOIN sales_flat_order_address AS table_shipping_name ON main_table.shipping_address_id=table_shipping_name.entity_id WHERE (main_table.entity_id IN('140650')) ON DUPLICATE KEY UPDATE entity_id = VALUES(entity_id), status = VALUES(status), store_id = VALUES(store_id), customer_id = VALUES(customer_id), base_grand_total = VALUES(base_grand_total), base_total_paid = VALUES(base_total_paid), grand_total = VALUES(grand_total), total_paid = VALUES(total_paid), increment_id = VALUES(increment_id), base_currency_code = VALUES(base_currency_code), order_currency_code = VALUES(order_currency_code), store_name = VALUES(store_name), created_at = VALUES(created_at), updated_at = VALUES(updated_at), billing_name = VALUES(billing_name), shipping_name = VALUES(shipping_name)
There seems to be no reference except the entity_id 140650 for sales_flat_order_grid.
If anyone has any idea, please let me know the possible solution.
This issue may be related to a bug in earlier versions of Magento. It used to be that the INSERT into sales_flat_order_grid occurred within a transaction, and because the query planner could not figure out which rows the grid-population query would touch, it locked the entire sales_flat_order_grid table. And because that happens within a transaction, the lock is retained until COMMIT.
If this is what is causing your problem, you will need to move the order grid calculation to commit_after.
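If you want to confirm that this kind of lock contention is what you are hitting, InnoDB's information_schema tables can show which transaction is blocking which while the promotion traffic runs (a diagnostic sketch; these tables exist in MySQL 5.5 through 5.7):
SELECT r.trx_id    AS waiting_trx,
       r.trx_query AS waiting_query,
       b.trx_id    AS blocking_trx,
       b.trx_query AS blocking_query
FROM information_schema.innodb_lock_waits w
JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id
JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id;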
OK, I am answering my own question, in case someone else finds it helpful.
Solution: run an INSERT for the entity_ids that are missing from the grid. That is, run the query above, but replace entity_id IN ('140650') with main_table.entity_id NOT IN (SELECT entity_id FROM sales_flat_order_grid), and that will solve it. This works because both tables share the same entity_id values.
I also came to know that you should enable the slow query log, so you can see which queries are taking long (they are the ones creating the deadlocks, since DB server resources get tied up on them). As soon as I found and resolved the third-party extension's slow query problem, the table lock errors stopped.
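For reference, the slow query log can be switched on at runtime without a restart (MySQL; the 2-second threshold is only an illustration, tune it to your traffic):
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 2;  -- log statements slower than 2 seconds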
Edit: Tidied up the query a bit. Checked running on one day (versus the 27 I need) and the query runs. With 27 days of data it's trying to process 5.67TB. Could this be the issue?
Latest ID of error run:
Job ID: ee-corporate:bquijob_3f47d425_1530e03af64
I keep getting this error message when trying to run a query in BigQuery, both through the UI and bigrquery.
Query Failed
Error: An internal error occurred and the request could not be completed.
Job ID: ee-corporate:bquijob_6b9bac2e_1530dba312e
Code below:
SELECT
CASE WHEN d.category_grouped IS NULL THEN 'N/A' ELSE d.category_grouped END AS category_grouped_cleaned,
COUNT(UNIQUE(msisdn_token)) AS users,
(SUM(up_link_data_bytes) + SUM(down_link_data_bytes))/1000000 AS tot_data_mb
FROM (
SELECT
request_domain, up_link_data_bytes, down_link_data_bytes, msisdn_token, timestamp
FROM (TABLE_DATE_RANGE([helpful-skyline-97216:WEBLOG_Staging.WEBLOG_], TIMESTAMP('20160101'), TIMESTAMP('20160127')))
WHERE SUBSTR(http_status_code,1,1) IN ('1', '2', '3')) a
LEFT JOIN EACH web_usage_201601.domain_to_cat_lookup_27JAN_with_groups d
ON
a.request_domain = d.request_domain
WHERE
DATE(timestamp) >= '2016-01-01'
AND DATE(timestamp) <= '2016-01-27'
GROUP EACH BY
1
Is there something I'm doing wrong?
The problem seems to be coming from UNIQUE(): it returns a repeated field with too many elements in it. The error message could be improved, but the workaround for you would be to use an explicit GROUP BY and then run COUNT on top of it, as in the sketch below.
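A minimal sketch of that workaround in legacy BigQuery SQL, deduplicating (category, user) pairs in an inner query and counting rows outside; <your_joined_source> stands in for the original FROM/LEFT JOIN block:
SELECT category_grouped_cleaned, COUNT(*) AS users
FROM (
  SELECT category_grouped_cleaned, msisdn_token
  FROM <your_joined_source>
  GROUP EACH BY 1, 2  -- one row per (category, user) pair
)
GROUP EACH BY category_grouped_cleaned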
If you are okay with an approximation, you can also use
COUNT(DISTINCT msisdn_token) AS users
or a higher approximation parameter than the default 1000,
COUNT(DISTINCT msisdn_token, 5000) AS users
GROUP BY is the most general approach, but these can be faster if they do what you need.
The following code works for Postgres (Heroku):
@messages = Message.select("DISTINCT ON (messages.conversation_id) *")
                   .where("messages.sender_id = (?) OR messages.recipient_id = (?)",
                          current_user.id, current_user.id)
However, when attempting to order the results by appending .order("messages.read_at DESC") I receive the following error:
ActionView::Template::Error (PGError: ERROR: column id_list.alias_0 does not exist)
Looking at the generated SQL, I see that an alias is being created around the ORDER BY clause that I never asked for:
messages.recipient_id = (32))) AS id_list ORDER BY id_list.alias_0 DESC)
I've not been able to figure out a workaround short of using find_by_sql for the entire statement, which takes a heavy toll on the app.
Don't vote for this; I'm only posting it as an answer because many lines of code don't display well in comments.
I would write a "query that returns messages grouped by their conversation_id, so that the last message in each conversation is shown" like this:
SELECT m.*
FROM messages m
JOIN
( SELECT conversation_id
, MAX(created_date) AS maxdate
FROM messages
WHERE ...
GROUP BY conversation_id
) AS grp
ON grp.conversation_id = m.conversation_id
AND grp.maxdate = m.created_date
ORDER BY m.read_at DESC
No idea how this can be done in Heroku, or if it is even possible, but it avoids the DISTINCT ON. If that's what's causing the error, it may be of help.
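For what it's worth, if window functions are available (PostgreSQL 8.4 or later), the same "latest message per conversation" query can be written with ROW_NUMBER(), which also breaks created_date ties cleanly; a sketch against the same messages table, with 32 standing in for current_user.id as in the generated SQL above:
SELECT *
FROM (
    SELECT m.*,
           ROW_NUMBER() OVER (PARTITION BY m.conversation_id
                              ORDER BY m.created_date DESC) AS rn
    FROM messages m
    WHERE m.sender_id = 32 OR m.recipient_id = 32
) t
WHERE rn = 1
ORDER BY read_at DESC;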