Getting map's key and value - aerospike

I'm using the C client library to fetch a map from Aerospike. I have code that iterates over the map:
as_hashmap_iterator it;
as_hashmap_iterator_init(&it, &map);
while ( as_hashmap_iterator_has_next(&it) )
{
    const as_val * val = as_hashmap_iterator_next(&it);
}
However, I don't know how to fetch the key and value from the as_val. Are there any functions like as_map_get_key(iterator) and as_map_get_value(iterator)?

Cast it to (as_pair *) and read the key and value from the pair; see the sketch below.
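A minimal sketch of that, assuming the pair accessors as_pair_1() / as_pair_2() from the Aerospike common library (check as_pair.h in the client version you build against):
as_hashmap_iterator it;
as_hashmap_iterator_init(&it, &map);
while ( as_hashmap_iterator_has_next(&it) )
{
    /* each element of the map is a key/value pair */
    as_pair * pair = (as_pair *) as_hashmap_iterator_next(&it);
    as_val * key   = as_pair_1(pair);  /* the map key   */
    as_val * value = as_pair_2(pair);  /* the map value */
    /* e.g. if the key is a string: char * k = as_string_get(as_string_fromval(key)); */
}
as_hashmap_iterator_destroy(&it);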

Related

Slick plain sql query with pagination

I have something like this, using Akka, Alpakka + Slick
Slick
  .source(
    sql"""select #${onlyTheseColumns.mkString(",")} from #${dbSource.table}"""
      .as[Map[String, String]]
      .withStatementParameters(rsType = ResultSetType.ForwardOnly, rsConcurrency = ResultSetConcurrency.ReadOnly, fetchSize = batchSize)
      .transactionally
  ).map( doSomething )...
I want to update this plain SQL query to skip the first N elements.
But that is very DB specific.
Is it possible to have Slick generate the pagination bit? [like for type-safe queries, where one just does drop, filter, take?]
PS: I don't have the schema, so I cannot go the type-safe way; I just want all tables as Maps and to filter, drop, etc. on them.
PS2: at the Akka level, Flow.drop works, but it's not optimal (slow), because it still consumes the rows.
Cheers
Since you are using plain SQL, you have to provide workable SQL in the code snippet yourself. Plain SQL is not type-safe, but it is flexible.
BTW, the most efficient way is to skip the first N elements in the database itself, for example with LIMIT/OFFSET in MySQL.
Depending on your database engine, you could use something like:
val page = 1
val pageSize = 10
val query = sql"""
  select #${onlyTheseColumns.mkString(",")}
  from #${dbSource.table}
  limit #${pageSize + 1}
  offset #${pageSize * (page - 1)}
"""
The pageSize + 1 part tells you whether a next page exists: if more than pageSize rows come back, there is at least one more page.
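A small sketch of that check (the helper name is illustrative, not part of the original code):
def paged[A](rows: Seq[A], pageSize: Int): (Seq[A], Boolean) = {
  val hasNextPage = rows.size > pageSize  // the extra row fetched by limit pageSize + 1
  (rows.take(pageSize), hasNextPage)      // drop the extra row before handing the page out
}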
I want to update this plain SQL query to skip the first N elements. But that is very DB specific.
As you're concerned about changing the SQL for different databases, I suggest you abstract away that part of the SQL and decide what to do based on the Slick profile being used.
If you are working with multiple database products, you've probably already abstracted away from any specific profile, perhaps using JdbcProfile. In that case you could place your "skip N elements" helper in a class and use the active slickProfile to decide on the SQL to use. (As an alternative you could of course check via some other means, such as an environment value you set.)
In practice that could be something like this:
case class Paginate(profile: slick.jdbc.JdbcProfile) {
  // Return the correct LIMIT/OFFSET SQL for the current Slick profile
  def page(size: Int, firstRow: Int): String =
    if (profile.isInstanceOf[slick.jdbc.H2Profile]) {
      s"LIMIT $size OFFSET $firstRow"
    } else if (profile.isInstanceOf[slick.jdbc.MySQLProfile]) {
      s"LIMIT $firstRow, $size"
    } else {
      // And so on... or a default
      // Danger: I've no idea if the above SQL is correct - it's just placeholder
      ???
    }
}
Which you could use as:
// Import your profile
import slick.jdbc.H2Profile.api._
val paginate = Paginate(slickProfile)
val action: DBIO[Seq[Int]] =
sql""" SELECT cols FROM table #${paginate.page(100, 10)}""".as[Int]
In this way, you get to isolate (and control) RDBMS-specific SQL in one place.
To make the helper more usable, and as slickProfile is implicit, you could instead write:
def page(size: Int, firstRow: Int)(implicit profile: slick.jdbc.JdbcProfile) =
// Logic for deciding on SQL goes here
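For completeness, a sketch of what that could look like (with the same placeholder-SQL caveat as the class above):
def page(size: Int, firstRow: Int)(implicit profile: slick.jdbc.JdbcProfile): String =
  profile match {
    case _: slick.jdbc.H2Profile    => s"LIMIT $size OFFSET $firstRow"
    case _: slick.jdbc.MySQLProfile => s"LIMIT $firstRow, $size"
    case _                          => s"LIMIT $size OFFSET $firstRow" // default guess; verify for your RDBMS
  }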
I feel obliged to comment that using a splice (#$) in plain SQL opens you to SQL injection attacks if any of the values are provided by a user.
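To illustrate that point (a sketch with a made-up query): values interpolated with $ are sent as bind parameters, while #$ splices raw text into the statement, so anything spliced this way (such as column or table names) should come from a trusted whitelist.
import slick.jdbc.H2Profile.api._
val id = "42" // imagine this came from a user
val safe   = sql"SELECT name FROM users WHERE id = $id"   // bind parameter
val unsafe = sql"SELECT name FROM users WHERE id = #$id"  // raw splice: injection risk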

GORM many2many preload error

Currently using GORM to connect to two databases: Postgres and SQLite (using a code switch to choose which one to use). I have two database tables defined in my schema that look like this:
type TableClient struct {
    Model
    Synchronised    bool
    FacilityID      string `gorm:"primary_key"`
    Age             int
    ClientSexID     int
    MaritalStatusID int
    SpecificNeeds   []TableOptionList `gorm:"many2many:options_specific_needs"`
}
type TableOptionList struct {
    ID      int `gorm:"primary_key"`
    Name    string
    Value   string
    Text    string
    SortKey int
}
Previously, I would preload related table with code like this:
var dbClient TableClient
Db.Where("facility_id = ? AND client_id = ? AND id = ?;", URLFacilityID, URLClientID, URLIncidentID).
    Preload("ClientSex").
    Preload("MaritalStatus").
    Preload("CareTakerRelationShip").
    Preload("HighestLevelOfEducation").
    Preload("Occupation").
    Preload("SpecificNeeds").
    First(&dbClient)
Now that lookup fails with a syntax error; when I look at the generated SQL, I see the following:
SELECT * FROM "table_option_lists" INNER JOIN "options_specific_needs" ON "options_specific_needs"."table_option_list_id" = "table_option_lists"."id" WHERE (("options_specific_needs"."table_client_id","options_specific_needs"."table_client_facility_id") IN (('one','LS034')))
Pasting that into the SQLite console also fails with the same error: (near ",": syntax error)
The crux of your issue is in your WHERE clause: you need to use "OR", not ",".
This is likely an issue with GORM, and you could open a GitHub issue with them for it. You can work around it using db.Raw() from GORM; a sketch is below. In general, I prefer to avoid ORMs. Part of the reason is that it is often easier to build an SQL query by hand than to figure out the strange way to do it in the ORM (for anything beyond basic CRUD).
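A sketch of that workaround, assuming the join-table and column names from the generated SQL above (adjust them to your actual schema; db is your existing *gorm.DB handle):
// getSpecificNeeds loads the related TableOptionList rows for one client
// with a hand-written query instead of Preload.
func getSpecificNeeds(db *gorm.DB, clientID, facilityID string) ([]TableOptionList, error) {
    var needs []TableOptionList
    err := db.Raw(`
        SELECT tol.*
        FROM table_option_lists tol
        JOIN options_specific_needs osn ON osn.table_option_list_id = tol.id
        WHERE osn.table_client_id = ? AND osn.table_client_facility_id = ?`,
        clientID, facilityID,
    ).Scan(&needs).Error
    return needs, err
}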

Slick Plain Sql Generic Return Type

I am trying to write a configurable SQL query executor using Slick. The user provides a prepared statement with ? placeholders, and at run time the exact query is formed by replacing the ? with values.
Generally this is how one would run a plain sql query using slick.
val query = sql"#$queryString".as[(String,Int)]
In my case I would not know the result type, so I want to get back a generic result type, maybe a list of tuples with each tuple representing a row of the result set.
Any ideas on how this would be done?
I found a solution in one of the Scala GitHub issues. Here it is:
import slick.jdbc.{GetResult, PositionedResult}

object ResultMap extends GetResult[Map[String, Any]] {
  def apply(pr: PositionedResult): Map[String, Any] = {
    val resultSet = pr.rs
    val metaData = resultSet.getMetaData
    (1 to pr.numColumns).map { i =>
      metaData.getColumnName(i) -> resultSet.getObject(i)
    }.toMap
  }
}
and then we can simply do val query = sql"#$queryString".as(ResultMap)
Hope it helps!!
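A usage sketch (db and queryString stand for whatever database handle and SQL string you already have; you also need your profile's api._ import for the sql interpolator):
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

// Each row comes back as a Map from column name to value.
val rows: Future[Seq[Map[String, Any]]] = db.run(sql"#$queryString".as(ResultMap))
rows.foreach(_.foreach(println))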

What is the ideal way to select rows by projected keys in Groundhog Haskell

I'm trying to create a many to many table using Groundhog in Haskell, which basically looks like this if I cut away all the other logic I have:
data FooRow = FooRow {
    fooRowUUID :: UUID
  }
deriving instance Show FooRow
data BarRow = BarRow {
    barRowUUID :: UUID
  }
deriving instance Show BarRow
data FooToBarRow = FooToBarRow {
    fooToBarRowUUID :: UUID,
    fooToBarRowFoo  :: DefaultKey FooRow,
    fooToBarRowBar  :: DefaultKey BarRow
  }
deriving instance Show FooToBarRow
Now, trying to define operations, I can get and insert all of these records just fine; however, I'm not sure how to go from having a FooRow (with its ID) to getting all the related BarRows by way of the many-to-many table. Right now I've played with something like this:
getBarsForFoo fooID = do
  barKeys <- project
    (FooToBarRowBarField)
    (FooToBarRowFooField ==. (Foo_FooKey fooID))
  select $ (BarRowUUIDField `in_` barKeys)
However this doesn't typecheck, with the error:
Couldn't match type 'UUID' with 'BarRow'
Inspecting just the results of the project with putStrLn, I can see that the type of barKeys is:
[Bar_BarKey UUID]
but I don't quite understand how to make use of that within my query. I don't see any examples like this in the Groundhog documentation, so I'm hoping someone will be able to put me on the right path here.
I'm quite certain there are more efficient ways to go about this (there will be a bunch of underlying queries with this approach), but it does at least get the job done for now.
getBarsForFoo fooID = do
  barKeys <- project
    (FooToBarRowBarField)
    (FooToBarRowFooField ==. (Foo_FooKey fooID))
  q <- mapM (getBy) barKeys
  return (catMaybes q :: [BarRow])

LINQ display row numbers

I simply want to include a row number against the returned results of my query.
I found the following post that describes what I am trying to achieve, but it gives me an exception:
http://vaultofthoughts.net/LINQRowNumberColumn.aspx
"An expression tree may not contain an assignment operator"
In MS SQL I would just use the ROW_NUMBER() function; I'm simply looking for the equivalent in LINQ.
Use AsEnumerable() to evaluate the final part of your query on the client, and in that final part add a counter column:
int rowNo = 0;
var results = (from data in db.Data
               // Add any processing to be performed server side
               select data)
              .AsEnumerable()
              .Select(d => new { Data = d, Count = ++rowNo });
I'm not sure whether LINQ to SQL supports it (but it probably will), but there's an overload of the Queryable.Select method that accepts a lambda with an index parameter. You can write your query as follows:
db.Authors.Select((author, index) => new
{
Lp = index, Name = author.Name
});
UPDATE:
I ran a few tests, but unfortunately LINQ to SQL does not support this overload (both 3.5sp1 and 4.0). It throws a NotSupportedException with the message:
Unsupported overload used for query operator 'Select'.
LINQ to SQL allows you to map a SQL function. While I've not tested this, I think this construct will work:
using System.Data.Linq;          // DataContext
using System.Data.Linq.Mapping;  // [Function]

public partial class YourDataContext : DataContext
{
    [Function(Name = "ROWNUMBER")]
    public int RowNumber()
    {
        throw new InvalidOperationException("Not called directly.");
    }
}
And write a query as follows:
from author in db.Authors
select new { Lp = db.RowNumber(), Name = author.Name };