Complex Couchbase query using metadata & GROUP BY - SQL

I am new to Couchbase and kind of stuck with the following problem.
This query works just fine in the Couchbase Query Editor:
SELECT
p.countryCode,
SUM(c.total) AS total
FROM bucket p
USE KEYS (
SELECT RAW "p::" || ca.token
FROM bucket ca USE INDEX (idx_cr)
WHERE ca._class = 'backend.db.p.ContactsDo'
AND ca.total IS NOT MISSING
AND ca.date IS NOT MISSING
AND ca.token IS NOT MISSING
AND ca.id = 288
ORDER BY ca.total DESC, ca.date ASC
LIMIT 20 OFFSET 0
)
LEFT OUTER JOIN bucket finished_contacts
ON KEYS ["finishedContacts::" || p.token]
GROUP BY p.countryCode ORDER BY total DESC
I get this:
[
{
"countryCode": "en",
"total": 145
},
{
"countryCode": "at",
"total": 133
},
{
"countryCode": "de",
"total": 53
},
{
"countryCode": "fr",
"total": 6
}
]
Now, using this query in a Spring Boot application, I end up with this error:
Unable to retrieve enough metadata for N1QL to entity mapping, have you selected _ID and _CAS?
Adding the metadata like this:
SELECT
meta(p).id AS _ID,
meta(p).cas AS _CAS,
p.countryCode,
SUM(c.total) AS total
FROM bucket p
and then trying to map it to the following object:
data class CountryIntermediateRankDo(
    @Id
    @Field
    val id: String,
    @Field
    @NotNull
    val countryCode: String,
    @Field
    @NotNull
    val total: Long
)
results in:
Unable to execute query due to the following n1ql errors:
{"msg":"Expression must be a group key or aggregate: (meta(p).id)","code":4210}
Using a Map as the return value results in:
org.springframework.data.couchbase.core.CouchbaseQueryExecutionException: Query returning a primitive type are expected to return exactly 1 result, got 0
Clearly I missed something important here in terms of how to write proper Couchbase queries. I am stuck between needing the metadata and getting this key/aggregate error related to the GROUP BY clause. I'd be very thankful for any help.

When you have a GROUP BY query, everything in the SELECT clause should be either a field used for grouping or a group aggregate. You need to add the new fields to the GROUP BY clause, sort of like this:
SELECT
_ID,
_CAS,
p.countryCode,
SUM(p.c.total) AS total
FROM testBucket p
USE KEYS ["foo", "bar"]
LEFT OUTER JOIN testBucket finished_contacts
ON KEYS ["finishedContacts::" || p.token]
GROUP BY p.countryCode, meta(p).id AS _ID, meta(p).cas AS _CAS
ORDER BY total DESC
(I had to make some changes to your query to work with it effectively. You'll need to retrofit the advice to your specific case.)
If you need more detailed advice, let me suggest the N1QL forum at https://forums.couchbase.com/c/n1ql . Stack Overflow is great for one-and-done questions, but the forum is better for extended interactions.
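One more hedged idea, in case adding meta(p).id to the GROUP BY ends up splitting the per-country groups back into per-document groups: project a synthetic _ID from the group key and a dummy _CAS instead. This is an assumption on my part, not something from the answer above (it relies on Spring Data Couchbase only needing non-null _ID and _CAS columns to run its mapping), so treat it as a sketch rather than a confirmed fix; the rest mirrors the original working query unchanged:
SELECT
p.countryCode AS _ID, /* synthetic _ID: reuse the group key */
0 AS _CAS,            /* dummy _CAS, assuming the mapper only checks that the column is present */
p.countryCode,
SUM(c.total) AS total
FROM bucket p
USE KEYS (
SELECT RAW "p::" || ca.token
FROM bucket ca USE INDEX (idx_cr)
WHERE ca._class = 'backend.db.p.ContactsDo'
AND ca.total IS NOT MISSING
AND ca.date IS NOT MISSING
AND ca.token IS NOT MISSING
AND ca.id = 288
ORDER BY ca.total DESC, ca.date ASC
LIMIT 20 OFFSET 0
)
LEFT OUTER JOIN bucket finished_contacts
ON KEYS ["finishedContacts::" || p.token]
GROUP BY p.countryCode
ORDER BY total DESC
The GROUP BY stays on p.countryCode alone, so the aggregation result is the same as in the Query Editor version.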

Related

Cannot parse SQL result count from Logic App

I run this simple query in Logic App using the "Execute a SQL query (V2)" connector to find out if a number exists in my table.
select count(*) from users where user_number='724-555-5555';
If the number exists, I get this JSON, but somehow I can't parse it.
[
{
"": 1
}
]
Any idea how to simply retrieve 0 or 1?
Thanks
David
You need to add an explicit column name:
SELECT
count(*) AS cnt
FROM
users
WHERE
user_number = '724-555-5555';
That will give you this result:
[ { "cnt": 1 } ]
...which is valid JSON.
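If you'd rather get an explicit 0 or 1 flag regardless of how many rows match, another hedged option (assuming the connector is talking to SQL Server / Azure SQL, where this CASE/EXISTS form is valid) is:
SELECT CASE WHEN EXISTS (
    SELECT 1 FROM users WHERE user_number = '724-555-5555'
) THEN 1 ELSE 0 END AS user_exists;
Either way, the important part is that the column has a name (cnt or user_exists here) so the Logic App can reference it when parsing the JSON result.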

N1QL query count for each document of specific type

I am new to Couchbase and to non-relational DBs.
I have a bucket with players and teams (2 types of documents).
Each player has a type, playedFor (an array with all the teams he played for), and a name. For example:
{
    "type": "player",
    "name": "player1",
    "playedFor": [
        "England/Manchester/United",
        "England/Manchester/City"
    ]
}
Each team has a type, a name, and a category. For example:
{
    "type": "team",
    "name": "England/Manchester/City",
    "category": "FC"
}
I want to know how many players played for each team of category FC.
I made this query to calculate it for a specific team:
SELECT COUNT(1) AS total
FROM bucket AS a
WHERE a.type='player'
AND (any r in a.playedFor satisfies r in ["England/Manchester/United"] end)
But how can I make this query work for all teams?
The wrinkle in the way you've modeled this data is that a player can play for 1 or more teams (hence the array).
One way to approach this is to use Couchbase's UNNEST clause to "flatten" these arrays (it's basically joining the document to each of the items in the array).
At that point, it becomes as easy as a standard GROUP BY. Here's an example:
SELECT team, count(1) AS totalPlayers
FROM `bucket` AS a
UNNEST a.playedFor team
WHERE a.type='player'
GROUP BY team
This query would generate output like:
[
{
"team": "Pittsburgh/Pirates",
"totalPlayers": 8
},
{
"team": "England/Manchester/United",
"totalPlayers": 10
},
{
"team": "England/Manchester/City",
"totalPlayers": 15
},
{
"team": "Cincinnati/Reds",
"totalPlayers": 21
}
]
(Sorry, I used MLB teams to augment your sample, since I don't know much about soccer teams).
Notice that the separate team documents don't figure into this query, but you could also JOIN to them if you need information from them for your quer(ies).
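For example, to restrict the counts to teams of category "FC" (as in the original question), a JOIN against the team documents could look roughly like this. It's only a sketch: it assumes Couchbase Server 5.5+ for ANSI JOIN syntax and a suitable index on the team documents' name field, so adjust it to your actual setup:
SELECT t.name AS team, COUNT(1) AS totalPlayers
FROM `bucket` AS a
UNNEST a.playedFor AS teamName
JOIN `bucket` AS t ON t.type = 'team' AND t.name = teamName AND t.category = 'FC'
WHERE a.type = 'player'
GROUP BY t.name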

Sequelize Query - Count associated tables and count all for pagination

This is my first question on Stack Overflow; I've never used it before, but this issue is making me tear my hair out.
I'm building an infinite scroll component for a React app I'm working on, and I'm trying to make a Postgres DB query work.
I have 2 tables - Challenges, and UserChallenges.
Challenges have many User Challenges.
I need to get a subsection of Challenges (from start to end) with each Challenge having a count of the number of "participants" (number of associated UserChallenges), and also a count of all challenges.
Something like this:
{
rows: [Challenge, Challenge, Challenge],
count: n
}
Where each challenge includes the total number of userChallenges as "participants" and count is a count of all challenges.
Here is the query:
let json_query = {
attributes: {
include: [[Sequelize.fn("COUNT", Sequelize.col("user_challenges.id")), "participants"]]
},
include: [{
model: UserChallenge, attributes: []
}],
order: [['timestamp', 'DESC']],
offset: start,
limit: end
}
The start and end quantities are the start and end of the pagination.
I'm running this query as follows:
var challengeInstances = await Challenge.findAndCountAll(json_query)
This results in the following error:
name: 'SequelizeDatabaseError',
parent: error: missing FROM-clause entry for table "user_challenges"
and this is the SQL it says it's running:
`SELECT "challenge".* FROM (SELECT "challenge"."id", "challenge".*, COUNT("user_challenges"."id"), "challenge"."participants" FROM "challenges" AS "challenge" GROUP BY "challenge"."id" ORDER BY "challenge"."end_date" DESC LIMIT '4' OFFSET '0') AS "challenge" LEFT OUTER JOIN "user_challenges" AS "user_challenges" ON "challenge"."id" = "user_challenges"."challenge_id" ORDER BY "challenge"."end_date" DESC;`,
Sequelize or raw queries are both good.
Do let me know if you need any more information and thank you so so much.
You can use a Sequelize literal like this: remove the object from attributes and use this code for attributes instead.
attributes: [
    [
        sequelize.literal(`(
            SELECT COUNT(id)
            FROM user_challenges
            WHERE
            -- your foreign key condition goes here, e.g. (user_challenges.participants_id = participants.id)
        )`),
        'numberOfParticipants'
    ]
]
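Since you mentioned raw queries are fine too, here is a hedged raw Postgres sketch of the whole thing. The table and column names (challenges, user_challenges, challenge_id, end_date) are guessed from the generated SQL in your question, so adjust them to your schema:
SELECT
  c.*,
  (SELECT COUNT(*)
     FROM user_challenges uc
    WHERE uc.challenge_id = c.id) AS participants,  -- participants per challenge
  COUNT(*) OVER () AS total_challenges              -- total number of challenges, for pagination
FROM challenges c
ORDER BY c.end_date DESC
LIMIT 4 OFFSET 0;                                   -- substitute your start/end values
COUNT(*) OVER () is evaluated before LIMIT is applied, so it returns the full challenge count even though only one page of rows comes back.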

How to query and iterate over array of structures in Athena (Presto)?

I have an S3 bucket with 500,000+ JSON records, e.g.:
{
"userId": "00000000001",
"profile": {
"created": 1539469486,
"userId": "00000000001",
"primaryApplicant": {
"totalSavings": 65000,
"incomes": [
{ "amount": 5000, "incomeType": "SALARY", "frequency": "FORTNIGHTLY" },
{ "amount": 2000, "incomeType": "OTHER", "frequency": "MONTHLY" }
]
}
}
}
I created a new table in Athena
CREATE EXTERNAL TABLE profiles (
userId string,
profile struct<
created:int,
userId:string,
primaryApplicant:struct<
totalSavings:int,
incomes:array<struct<amount:int,incomeType:string,frequency:string>>
>
>
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
WITH SERDEPROPERTIES ( 'ignore.malformed.json' = 'true')
LOCATION 's3://profile-data'
I am interested in the incomeTypes, e.g. "SALARY", "PENSIONS", "OTHER", etc., and ran this query, changing jsonData.incometype each time:
SELECT jsonData
FROM "sampledb"."profiles"
CROSS JOIN UNNEST(sampledb.profiles.profile.primaryApplicant.incomes) AS la(jsonData)
WHERE jsonData.incometype='SALARY'
This worked fine with CROSS JOIN UNNEST, which flattened the incomes array so that the data example above would span 2 rows. The only idiosyncratic thing was that CROSS JOIN UNNEST made all the field names lowercase, e.g. a row looked like this:
{amount=1520, incometype=SALARY, frequency=FORTNIGHTLY}
Now I have been asked how many users have two or more "SALARY" entries, e.g.:
"incomes": [
{ "amount": 3000, "incomeType": "SALARY", "frequency": "FORTNIGHTLY" },
{ "amount": 4000, "incomeType": "SALARY", "frequency": "MONTHLY" }
],
I'm not sure how to go about this.
How do I query the array of structures to look for duplicate incomeTypes of "SALARY"?
Do I have to iterate over the array?
What should the result look like?
UNNEST is a very powerful feature, and it's possible to solve this problem using it. However, I think using Presto's lambda functions is more straightforward:
SELECT COUNT(*)
FROM sampledb.profiles
WHERE CARDINALITY(FILTER(profile.primaryApplicant.incomes, income -> income.incomeType = 'SALARY')) > 1
This solution uses FILTER on the profile.primaryApplicant.incomes array to get only those with an incomeType of SALARY, and then CARDINALITY to extract the length of that result.
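If you also need to see which users those are, the same FILTER/CARDINALITY idea can be projected per row; this is just a sketch reusing the column names from the table definition above:
SELECT userId,
       CARDINALITY(FILTER(profile.primaryApplicant.incomes,
                          income -> income.incomeType = 'SALARY')) AS salary_entries
FROM sampledb.profiles
WHERE CARDINALITY(FILTER(profile.primaryApplicant.incomes,
                         income -> income.incomeType = 'SALARY')) > 1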
Case sensitivity is never easy with SQL engines. In general I think you should not expect them to respect case, and many don't. Athena in particular explicitly converts column names to lower case.
You can combine filter with cardinality to find the documents whose array has elements with incomeType = 'SALARY' more than once.
This can be further improved, so that the intermediate array is not materialized, by using reduce (see examples in the docs; I'm not quoting them here, since they do not directly answer your question).
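For what it's worth, a rough, untested sketch of that reduce idea (my own illustration, following Presto's reduce(array, initialState, inputFunction, outputFunction) signature) might look like:
SELECT COUNT(*)
FROM sampledb.profiles
WHERE REDUCE(
        profile.primaryApplicant.incomes,
        0,                                                          -- running count of SALARY entries
        (s, income) -> IF(income.incomeType = 'SALARY', s + 1, s),
        s -> s
      ) > 1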

Working with Structs within Arrays for new BigQuery Standard SQL

I'm trying to find rows with duplicate fields in an array of structs within a Google BigQuery table, using the new Standard SQL. The data in the table is simplified here; each row looks a bit like this:
{
"Session": "abc123",
"Information" [
{
"Identifier": "e8d971a4-ef33-4ea1-8627-f1213e4c67dc"
},
{
"Identifier": "1c62813f-7ec4-4968-b18b-d1eb8f4d9d26"
},
{
"Identifier": "e8d971a4-ef33-4ea1-8627-f1213e4c67dc"
}
]
}
My end goal is to display the rows that have Information entities with duplicate Identifier values present. However, most of the queries I attempt get an error message of the following form:
Cannot access field Identifier on a value with type ARRAY<STRUCT<Identifier STRING>>
Is there a way to work with the data inside of a STRUCT within an ARRAY?
Here's my first attempt at a query:
SELECT
Session,
Information
FROM
`events.myevents`
WHERE
COUNT(DISTINCT Information.Identifier) != ARRAY_LENGTH(Information.Identifier)
LIMIT
1000
And another using a subquery:
SELECT
Session,
Information
FROM (
SELECT
Session,
Information,
COUNT(DISTINCT Information.Identifier) AS info_count_distinct,
ARRAY_LENGTH(Information) AS info_count
FROM
`events.myevents`
WHERE
COUNT(DISTINCT Information.Identifier) != ARRAY_LENGTH(Information.Identifier)
LIMIT
1000)
WHERE
info_count != info_count_distinct
Try below
SELECT Session, Identifier, COUNT(1) AS dups
FROM `events.myevents`, UNNEST(Information)
GROUP BY Session, Identifier
HAVING dups > 1
ORDER BY Session
It should give you what you expect, plus the number of dups.
Like below (example)
Session Identifier dups
abc123 e8d971a4-ef33-4ea1-8627-f1213e4c67dc 2
abc345 1c62813f-7ec4-4968-b18b-d1eb8f4d9d26 3
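And if you need the original rows themselves (your stated end goal) rather than the per-identifier counts, a hedged variant using a correlated subquery over the array would be:
SELECT e.*
FROM `events.myevents` AS e
WHERE (SELECT COUNT(i.Identifier) - COUNT(DISTINCT i.Identifier)
       FROM UNNEST(e.Information) AS i) > 0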