Query to filter where value equals #param unless #param is "Other" - SQL

I have a filter dropdown list with the following options in it:
1, 2, 3, 4, 5, Other
When the user selects an option I will run a simple SQL query to filter the data by that value such as:
SELECT * FROM Product WHERE Code = #Code
The only problem is that when the "Other" option is selected I need to show everything that does not have a code of 1,2,3,4, or 5.
The data looks like the following:
Id: 1, Name: Product 1, Code: 1
Id: 2, Name: Product 2, Code: 2
Id: 3, Name: Product 3, Code: null
Id: 4, Name: Product 4, Code: 3
Id: 5, Name: Product 5, Code: 12
If the user selects "Other" I need to only display: "Product 3" and "Product 5".

A simple OR condition should accomplish that:
SELECT *
FROM Product
WHERE (Code = #Code)
   OR (#Code = 'Other' AND (Code NOT IN (1, 2, 3, 4, 5) OR Code IS NULL))
Note the explicit IS NULL check: NOT IN never matches a NULL Code, so without it "Product 3" would be filtered out.

Is this what you want?
SELECT *
FROM Product
WHERE (Code = #Code AND #Code IN ('1', '2', '3', '4', '5')) OR
      ((Code NOT IN ('1', '2', '3', '4', '5') OR Code IS NULL) AND #Code = 'Other')
If Code is an integer, then the above may return a type conversion error. In that case, I would recommend:
WHERE Code = TRY_CONVERT(INT, #Code) OR
      ((Code NOT IN (1, 2, 3, 4, 5) OR Code IS NULL) AND #Code = 'Other')
Either way, the IS NULL check is needed so that rows with a NULL Code (such as "Product 3") show up under "Other".
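One subtlety worth testing is the NULL row: NOT IN never matches a NULL Code, so "Product 3" needs an explicit IS NULL check. Here is a quick sanity check in Python, with SQLite standing in for the real database (so a plain CAST replaces something like TRY_CONVERT):

```python
import sqlite3

# In-memory table mirroring the sample data from the question
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Product (Id INTEGER, Name TEXT, Code INTEGER)")
conn.executemany(
    "INSERT INTO Product VALUES (?, ?, ?)",
    [(1, "Product 1", 1), (2, "Product 2", 2), (3, "Product 3", None),
     (4, "Product 4", 3), (5, "Product 5", 12)],
)

def filter_products(code):
    # :code is the dropdown value; the IS NULL check is what keeps
    # NULL-coded rows in the "Other" bucket.
    sql = """
        SELECT Name FROM Product
        WHERE (CAST(Code AS TEXT) = :code)
           OR (:code = 'Other' AND (Code NOT IN (1, 2, 3, 4, 5) OR Code IS NULL))
    """
    return [row[0] for row in conn.execute(sql, {"code": code})]

print(filter_products("2"))      # ['Product 2']
print(filter_products("Other"))  # ['Product 3', 'Product 5']
```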

How do I insert and update array columns in Node-Postgres?

I have the following table in Postgres:
_id: integer, user_id: integer, items: Array
I wish to insert the following into the table:
1, 1, [{productId: 1, size: 'large', quantity: 5}]
Next I wish to update the row with the following:
1, 1, [{productId: 1, size: 'small', quantity: 3}]
How do I do this in node-postgres?
Pseudocode:
update cart
set items.quantity = 3
where cart._id = 1
and cart.items.product_id = 1
and cart.items.size='large'
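One common way to model this with node-postgres is to store items as a jsonb column and issue parameterized queries. A minimal sketch, assuming a cart table with an items jsonb column and a configured pg.Pool named pool (both assumptions, not from the question):

```javascript
// Sketch only: assumes a table  cart(_id int, user_id int, items jsonb)
// and a connected pg.Pool named `pool`.
const insertSql = 'INSERT INTO cart (_id, user_id, items) VALUES ($1, $2, $3::jsonb)';
const insertParams = [1, 1, JSON.stringify([{ productId: 1, size: 'large', quantity: 5 }])];

// Simplest "update one element": replace the whole jsonb array from the
// application side, rather than patching inside it with SQL.
const updateSql = 'UPDATE cart SET items = $2::jsonb WHERE _id = $1';
const updateParams = [1, JSON.stringify([{ productId: 1, size: 'small', quantity: 3 }])];

// With a live connection you would run:
//   await pool.query(insertSql, insertParams);
//   await pool.query(updateSql, updateParams);
```

Updating a single element in place is possible with Postgres's jsonb_set, but it needs the element's array index rather than a product_id/size match, which is why reading, modifying, and writing back the whole array is often the simpler route.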

How do I modify arrays inside of a map in Kotlin

I am working with a map with strings as keys and arrays as values. I would like to keep the original string keys but replace each array with its average value.
The original map is:
val appRatings = mapOf(
"Calendar Pro" to arrayOf(1, 5, 5, 4, 2, 1, 5, 4),
"The Messenger" to arrayOf(5, 4, 2, 5, 4, 1, 1, 2),
"Socialise" to arrayOf(2, 1, 2, 2, 1, 2, 4, 2)
)
What I have tried to do is:
val averageRatings = appRatings.forEach{ (k,v) -> v.reduce { acc, i -> acc + 1 }/v.size}
However this returns Unit instead of a map. What am I doing wrong? I am working through a lambda assignment and we are asked to use forEach and reduce to get the answer.
You can use forEach and reduce, but it's overkill, because you can just use mapValues and take the average:
val appRatings = mapOf(
"Calendar Pro" to arrayOf(1, 5, 5, 4, 2, 1, 5, 4),
"The Messenger" to arrayOf(5, 4, 2, 5, 4, 1, 1, 2),
"Socialise" to arrayOf(2, 1, 2, 2, 1, 2, 4, 2)
)
val averages = appRatings.mapValues { (_, v) -> v.average() }
println(averages)
Output:
{Calendar Pro=3.375, The Messenger=3.0, Socialise=2.0}
You can do this with mapValues function:
val appRatings = mapOf(
"Calendar Pro" to arrayOf(1, 5, 5, 4, 2, 1, 5, 4),
"The Messenger" to arrayOf(5, 4, 2, 5, 4, 1, 1, 2),
"Socialise" to arrayOf(2, 1, 2, 2, 1, 2, 4, 2)
)
val ratingsAverage = appRatings.mapValues { it.value.average() }
You already got some answers (including literally from JetBrains?? nice) but just to clear up the forEach thing:
forEach is a "do something with each item" function that returns nothing (well, Unit) - it's terminal, the last thing you can do in a chain, because it doesn't return a value to do anything else with. It's basically a for loop, and it's about side effects, not transforming the collection that was passed in and producing different data.
onEach is similar, except it returns the original item - so you call onEach on a collection, you get the same collection as a result. So this one isn't terminal, and you can pop it in a function chain to do something with the current set of values, without altering them.
map is your standard "transform items into other items" function - if you want to put a collection in and get a different collection out (like transforming arrays of Ints into single Int averages) then you want map. (The name comes from mapping values onto other values, translating them - which is why you always get the same number of items out as you put in)

Select within Structs within Arrays in SQL

I'm trying to find rows with N count of identifier A AND M count of identifier B in an array of structs within a Google BigQuery table, using the new Standard SQL. Each row in the table (simplified) looks a bit like this:
{
  "Session": "abc123",
  "Information": [
    {
      "Identifier": "A",
      "Count": 1
    },
    {
      "Identifier": "B",
      "Count": 2
    },
    {
      "Identifier": "C",
      "Count": 3
    }
    ...
  ]
}
I've been struggling to work with the structs inside the array. Is there any way I can do that?
Below is for BigQuery Standard SQL
#standardSQL
SELECT *
FROM `project.dataset.table`
WHERE 2 = (SELECT COUNT(1) FROM UNNEST(information) kv WHERE kv IN (('a', 5), ('b', 10)))
Applied to dummy data, as in the example below:
#standardSQL
WITH `project.dataset.table` AS (
SELECT 'abc123' session, [STRUCT('a' AS identifier, 1 AS `count`), ('b', 2), ('c', 3)] information UNION ALL
SELECT 'abc456', [('a', 5), ('b', 10), ('c', 20)]
)
SELECT *
FROM `project.dataset.table`
WHERE 2 = (SELECT COUNT(1) FROM UNNEST(information) kv WHERE kv IN (('a', 5), ('b', 10)))
the result is
Row  session  information.identifier  information.count
1    abc456   a                       5
              b                       10
              c                       20
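For intuition, here is the same counting logic in plain Python, where tuples stand in for the structs produced by UNNEST (the data mirrors the dummy rows above):

```python
# Each row: (session, list of (identifier, count) structs)
rows = [
    ("abc123", [("a", 1), ("b", 2), ("c", 3)]),
    ("abc456", [("a", 5), ("b", 10), ("c", 20)]),
]

# The pairs we require, playing the role of  kv IN (('a', 5), ('b', 10))
wanted = {("a", 5), ("b", 10)}

# Keep rows where exactly 2 structs match, mirroring
#   WHERE 2 = (SELECT COUNT(1) FROM UNNEST(information) kv WHERE kv IN ...)
matches = [s for s, info in rows if sum(kv in wanted for kv in info) == 2]
print(matches)  # ['abc456']
```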

Oracle SQL error: maximum number of expressions

Could you please help me with this issue? I am getting the following error in Oracle SQL:
ORA-01795: maximum number of expressions in a list is 1000
I'm passing values like:
and test in (1, 2, 3.....1000)
Try splitting your query into multiple IN clauses, like below:
SELECT *
FROM table_name
WHERE test IN (1,2,3,....500)
OR test IN (501, 502, ......1000);
You can try these workarounds:
Split the single IN into several ones:
select ...
from ...
where test in (1, ..., 999) or
test in (1000, ..., 1999) or
...
test in (9000, ..., 9999)
Put values into a (temporary?) table, say TestTable:
select ...
from ...
where test in (select TestField
from TestTable)
Edit: As I can see, the main difficulty is building such a query. Let's implement it in C#. We are given a collection of ids:
// Test case ids are in [1..43] range
IEnumerable<int> Ids = Enumerable.Range(1, 43);
// Test case: 7; in an actual Oracle query you would probably set it to 100 or 1000
int chunkSize = 7;
string fieldName = "test";
string filterText = string.Join(" or " + Environment.NewLine, Ids
.Select((value, index) => new {
value = value,
index = index
})
.GroupBy(item => item.index / chunkSize)
.Select(chunk =>
$"{fieldName} in ({string.Join(", ", chunk.Select(item => item.value))})"));
if (!string.IsNullOrEmpty(filterText))
filterText = $"and \r\n({filterText})";
string sql =
$@"select MyField
from MyTable
where (1 = 1) {filterText}";
Test:
Console.Write(sql);
Outcome:
select MyField
from MyTable
where (1 = 1) and
(test in (1, 2, 3, 4, 5, 6, 7) or
test in (8, 9, 10, 11, 12, 13, 14) or
test in (15, 16, 17, 18, 19, 20, 21) or
test in (22, 23, 24, 25, 26, 27, 28) or
test in (29, 30, 31, 32, 33, 34, 35) or
test in (36, 37, 38, 39, 40, 41, 42) or
test in (43))
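The same chunking idea can be sketched in Python, if you are generating the query from a script (the field name and chunk size below are just the test values from the C# example):

```python
def build_in_filter(field, values, chunk_size=1000):
    # Split values into chunks of at most chunk_size (Oracle's ORA-01795
    # limit is 1000) and OR the resulting IN lists together.
    chunks = [values[i:i + chunk_size] for i in range(0, len(values), chunk_size)]
    return " or ".join(
        f"{field} in ({', '.join(str(v) for v in chunk)})" for chunk in chunks
    )

ids = list(range(1, 44))  # 43 ids, as in the C# test case
print(build_in_filter("test", ids, chunk_size=7))
```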

Create, and insert into, an Aerospike ordered map from Python

I see documentation for appending to a list in Aerospike, from Python, namely:
key = ('test', 'demo', 1)
rec = {'country': 'India', 'city': ['Pune', 'Delhi']}
client.put(key, rec)
client.list_append(key, 'city', 'Mumbai')
However I don't know how to add elements to a map in Aerospike, from Python, and I also don't know how to define said map as sorted.
Essentially I am trying to model a time series as follows:
ticker1: {intepochtime1: some_number, intepochtime2: some_other_number,...}
ticker2: {intepochtime1: some_number, intepochtime2: some_other_number,...}
........
where the tickers are the record keys (so they are indexed, obviously), and the intepochtimes are JS-style integer timestamps that are effectively indexed as well, by virtue of being stored in ascending or descending order, and are therefore easily range-queryable. How is this doable from Python?
Here is some sample code to get you started:
Also on github: https://github.com/pygupta/aerospike-discuss/tree/master/stkovrflo_Py_SortedMaps
import aerospike
from aerospike import predicates as p

def print_result(result_tuple):
    key, metadata, record = result_tuple
    print(record)

config = {'hosts': [("localhost", 3000)]}
client = aerospike.client(config).connect()

map_policy = {'map_order': aerospike.MAP_KEY_VALUE_ORDERED}

# Insert the records
key = ("test", "demo", 'km1')
client.map_set_policy(key, "mymap", map_policy)
client.map_put(key, "mymap", '0', 13)
client.map_put(key, "mymap", '1', 3)
client.map_put(key, "mymap", '2', 7)
client.map_put(key, "mymap", '3', 2)
client.map_put(key, "mymap", '4', 12)
client.map_put(key, "mymap", '5', 33)
client.map_put(key, "mymap", '6', 1)
client.map_put(key, "mymap", '7', 12)
client.map_put(key, "mymap", '8', 22)

# Query for sorted values
print("Sorted by values, 2 - 14")
ret_val = client.map_get_by_value_range(key, "mymap", 2, 14, aerospike.MAP_RETURN_VALUE)
print(ret_val)

# Get the first 3 indexes
print("Index 0 - 3")
ret_val2 = client.map_get_by_index_range(key, "mymap", 0, 3, aerospike.MAP_RETURN_VALUE)
print(ret_val2)
pgupta#ubuntu:~/discussRepo/aerospike-discuss/stkovrflo_Py_SortedMaps$ python sortedMapExample.py
Sorted by values, 2 - 14
[2, 3, 7, 12, 12, 13]
Index 0 - 3
[13, 3, 7]
Look at the Python documentation for the Client. Requires server version 3.8.4+.
Create a map policy: define one of the key ordered or key value ordered policies (see http://www.aerospike.com/apidocs/python/client.html#map-policies for map_order).
Put the map type bin, but first set the map policy: see map_set_policy(key, bin, map_policy) at http://www.aerospike.com/apidocs/python/client.html#id1, then call map_put().
Sorted maps are just regular maps, but with a map_order policy.
Note: the python3 memory leak was fixed in client version 2.0.8.