I have one model:
class Model_Alumno extends Model_Table {
public $entity_code='alumno';
function init(){
parent::init();
$this->defineAuditFields();
$this->addField('name')->caption('Nombre del Alumno')->mandatory(true);
$this->addField('is_inscrito')->type('boolean')->calculated(true);
}
function calculate_is_inscrito(){
what goes here????
}
}
I want to calculate is_inscrito as Y or N, depending on whether a record with the alumno's id exists in some other table, so I can use an SQL expression like this:
SELECT IF( (SELECT count(*) FROM programaPago, alumno WHERE
alumno_id=CORRESPONDING ID)>0, 'Y', 'N')
How can I write the calculate_is_inscrito function?
function calculate_is_inscrito(){
return "IF( (SELECT count(*) FROM programaPago, alumno WHERE alumno_id=".
($this->table_alias?:$this->entity_code).".id)>0, 'Y', 'N')";
}
I have a table with columns timestamp, id and condition, and I want to count the rows for each id per interval, such as 10 seconds.
If condition is true, the count increments; otherwise it keeps the previous value.
The UDAF code looks like this:
public class MyCount extends UserDefinedAggregateFunction {
@Override
public StructType inputSchema() {
return DataTypes.createStructType(
Arrays.asList(
DataTypes.createStructField("condition", DataTypes.BooleanType, true),
DataTypes.createStructField("timestamp", DataTypes.LongType, true),
DataTypes.createStructField("interval", DataTypes.IntegerType, true)
)
);
}
@Override
public StructType bufferSchema() {
return DataTypes.createStructType(
Arrays.asList(
DataTypes.createStructField("timestamp", DataTypes.LongType, true),
DataTypes.createStructField("count", DataTypes.LongType, true)
)
);
}
@Override
public DataType dataType() {
return DataTypes.LongType;
}
@Override
public boolean deterministic() {
return true;
}
@Override
public void initialize(MutableAggregationBuffer mutableAggregationBuffer) {
mutableAggregationBuffer.update(0, 0L);
mutableAggregationBuffer.update(1, 0L);
}
public void update(MutableAggregationBuffer mutableAggregationBuffer, Row row) {
long timestamp = mutableAggregationBuffer.getLong(0);
long count = mutableAggregationBuffer.getLong(1);
long event_time = row.getLong(1);
int interval = row.getInt(2);
if (event_time > timestamp + interval) {
timestamp = event_time - event_time % interval;
count = 0;
}
if (row.getBoolean(0)) {
count++;
}
mutableAggregationBuffer.update(0, timestamp);
mutableAggregationBuffer.update(1, count);
}
@Override
public void merge(MutableAggregationBuffer mutableAggregationBuffer, Row row) {
}
@Override
public Object evaluate(Row row) {
return row.getLong(1);
}
}
Then I submit a SQL query like:
select timestamp, id, MyCount(true, timestamp, 10) over(PARTITION BY id ORDER BY timestamp) as count from xxx.xxx
the result is:
timestamp id count
1642760594 0 1
1642760596 0 2
1642760599 0 3
1642760610 0 2 --duplicate
1642760610 0 2
1642760613 0 3
1642760594 1 1
1642760597 1 2
1642760600 1 1
1642760603 1 2
1642760606 1 4 --duplicate
1642760606 1 4
1642760608 1 5
When the timestamp is repeated, I get 1,2,4,4,5 instead of 1,2,3,4,5
How to fix it?
And another question: when is the merge method of a UDAF executed? I left it with an empty implementation but everything still runs normally. I tried adding a log statement in the method but I never saw that log. Is it really necessary?
There is a similar question: Apache Spark SQL UDAF over window showing odd behaviour with duplicate input
However, row_number() does not have this problem. row_number() is a Hive UDAF, so I tried creating a Hive UDAF myself, but I ran into the same problem... Why does the Hive UDAF row_number()'s terminate() return an ArrayList? I created my own row_number2() by copying its code and I also got a list back.
Finally I solved it with a Spark AggregateWindowFunction:
case class Count(condition: Expression) extends AggregateWindowFunction with Logging {
override def prettyName: String = "myCount"
override def dataType: DataType = LongType
override def children: Seq[Expression] = Seq(condition)
private val zero = Literal(0L)
private val one = Literal(1L)
private val count = AttributeReference("count", LongType, nullable = false)()
private val increaseCount = If(condition, Add(count, one), count)
override val initialValues: Seq[Expression] = zero :: Nil
override val updateExpressions: Seq[Expression] = increaseCount :: Nil
override val evaluateExpression: Expression = count
override val aggBufferAttributes: Seq[AttributeReference] = count :: Nil
}
Then register it through the session's internal function registry (spark.sessionState.functionRegistry.registerFunction).
"select myCount(true) over(partition by window(timestamp, '10 seconds'), id order by timestamp) as count from xxx"
How can I write the following SQL statement in Slick? The issue is that the select statement uses FILTER on the aggregates and I don't know how to express that in Slick.
SELECT Sellers.ID,
COALESCE(count(DISTINCT Produce.IMPORTERID) FILTER (WHERE Produce.CREATED > '2019-04-30 16:38:00'), 0::int) AS AFTERDATE,
COALESCE(count(DISTINCT Produce.IMPORTERID) FILTER (WHERE Produce.NAME::text = 'Apple'::text AND Produce.CREATED > '2018-01-30 16:38:00'), 0::bigint) AS APPLES
FROM Sellers
JOIN Produce ON Produce.SellersID = Sellers.ID
WHERE Sellers.ID = 276
GROUP BY Sellers.ID;
Try
import java.time.LocalDateTime
import java.time.format.DateTimeFormatter
import slick.jdbc.PostgresProfile.api._
case class Seller(id: Long)
case class Produce(name: String, sellerId: Long, importerId: Long, created: LocalDateTime)
class Sellers(tag: Tag) extends Table[Seller](tag, "Sellers") {
def id = column[Long]("ID", O.PrimaryKey)
def * = id <> (Seller.apply, Seller.unapply)
}
class Produces(tag: Tag) extends Table[Produce](tag, "Produce") {
def name = column[String]("NAME", O.PrimaryKey)
def sellerId = column[Long]("SellersID")
def importerId = column[Long]("IMPORTERID")
def created = column[LocalDateTime]("CREATED")
def * = (name, sellerId, importerId, created) <> (Produce.tupled, Produce.unapply)
}
val sellers = TableQuery[Sellers]
val produces = TableQuery[Produces]
val dtf = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss")
val ldt2019 = LocalDateTime.parse("2019-04-30 16:38:00", dtf)
val ldt2018 = LocalDateTime.parse("2018-01-30 16:38:00", dtf)
sellers.join(produces).on(_.id === _.sellerId)
.filter { case (s, p) => p.sellerId === 276L }
.groupBy { case (s, p) => s.id }
.map { case (sid, group) =>
(
sid,
group
.filter { case (s, p) => p.created > ldt2019 }
.map { case (s, p) => p.importerId }
.distinct.length,
group
.filter { case (s, p) => p.name === "Apple" && p.created > ldt2018 }
.map { case (s, p) => p.importerId }
.distinct.length
)
}
libraryDependencies += "com.github.tminglei" %% "slick-pg" % "0.18.0"
I really hope something like @Dymytro's answer can work, but from my testing it all comes down to limitations with the GROUP BY, and here are the issues you will run into:
Trying to use just Slick with a Postgres driver won't work because Slick doesn't support aggregate functions with a FILTER clause. Postgres is one of the few databases that supports FILTER! So you won't get far:
someQuery
.groupBy { a => a.pivot }
.map{ case (pivot, query) =>
(
pivot,
query
.filter(_.condition === "stuff")
.map(_.column).distinct.length
)
}
Although it compiles, you'll get some kind of runtime error like:
[ERROR] slick.SlickTreeException: Cannot convert node to SQL Comprehension
Then, if you check out slick-pg you'll notice it has support for Postgres aggregate functions! Including the FILTER clause! But... there's an open issue for aggregate functions with GROUP BY so this sort of attempt will fail too:
import com.github.tminglei.slickpg.agg.PgAggFuncSupport.GeneralAggFunctions._
...
someQuery
.groupBy { a => a.pivot }
.map{ case (pivot, query) =>
(
pivot,
query
.map(a => count(a.column.distinct).filter(a.condition === "stuff"))
)
}
No matching Shape found.
[error] Slick does not know how to map the given types.
So until those issues are resolved or someone posts a workaround, simple single-column FILTER expressions can fortunately be implemented equivalently with the more primitive CASE expressions. Though not as pretty, it will work!
val caseApproach = someQuery
.groupBy { a => a.pivot }
.map{ case (pivot, query) =>
(
pivot,
query
.map{ a =>
Case If a.condition === "stuff" Then a.column
}.min //here's where you add the aggregate, e.g. "min"
)
}
println(caseApproach.result.statements.headOption)
select pivot, min((case when ("condition" = 'stuff') then "column" end)) from table group by pivot;
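Applied to the query from the question (using count(DISTINCT ...) as the aggregate instead of min), the CASE rewrite would come out roughly like this; it is only a sketch, and the COALESCE wrappers are no longer needed because COUNT already skips the NULLs produced by a CASE with no ELSE:
SELECT Sellers.ID,
count(DISTINCT CASE WHEN Produce.CREATED > '2019-04-30 16:38:00' THEN Produce.IMPORTERID END) AS AFTERDATE,
count(DISTINCT CASE WHEN Produce.NAME = 'Apple' AND Produce.CREATED > '2018-01-30 16:38:00' THEN Produce.IMPORTERID END) AS APPLES
FROM Sellers
JOIN Produce ON Produce.SellersID = Sellers.ID
WHERE Sellers.ID = 276
GROUP BY Sellers.ID;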
I have three models:
Rental.php
class Rental extends Model
{
use SoftDeletes;
public function rentalItem()
{
return $this->hasMany('App\Models\RentalItem');
}
}
RentalItem.php
class RentalItem extends Model
{
public function rentalAsset()
{
return $this->belongsTo('App\Models\RentalAsset');
}
public function rental()
{
return $this->belongsTo('App\Models\Rental');
}
}
and RentalAsset.php
class RentalAsset extends Model
{
public function rentalItem()
{
return $this->hasMany('App\Models\RentalItem');
}
}
One Rental can have many RentalItems, each of which belongs to a RentalAsset.
RentalAsset is the product card for each RentalItem, so one RentalItem has exactly one RentalAsset.
When a RentalItem is created, it first needs to check whether the asset is available in the date interval. This is done by checking whether, between date_from and date_to, there is some Rental which has a RentalItem related to the RentalAsset.
I want a query which returns all RentalAssets that are not related to any RentalItem belonging to a Rental during those dates.
Unfortunately, this is not working right:
$freeAssets = RentalAsset::where('rental_asset_category_id', '=', $request->rental_asset_category_id)
->whereDoesntHave('rentalItem.rental', function($query) use($date_from, $date_to)
{
$query->where('date_from', '<=', $date_to);
$query->where('date_to', '>=', $date_from);
})
->get();
Your help is highly appreciated!
Thanks a lot.
UPDATE:
using Laravel 5.6
UPDATE 2:
I dumped the SELECT generated by the Eloquent query above:
select * from `rental_assets` where `rental_asset_category_id` = ? and exists
(select * from `rental_items` where `rental_assets`.`id` = `rental_items`.`rental_asset_id` and not exists
(select * from `rentals` where `rental_items`.`rental_id` = `rentals`.`id`
and `date_from` <= ? and `date_to` >= ? and `rentals`.`deleted_at` is null))
and `rental_assets`.`deleted_at` is null
And this select returns what I need:
select * from `rental_assets` where `rental_asset_category_id` = 2
and not exists (
select * from `rental_items` where `rental_assets`.`id` = `rental_items`.`rental_asset_id` and exists
(select * from `rentals` where `rental_items`.`rental_id` = `rentals`.`id`
and `date_from` <= '2018-12-12' and `date_to` >= '2018-01-01' and `rentals`.`deleted_at` is null))
and `rental_assets`.`deleted_at` is null;
What is the correct Eloquent query? I would rather use that than a raw query.
Thanks.
This is probably what you need here:
$freeAssets = RentalAsset::where('rental_asset_category_id', $request->rental_asset_category_id)
->whereDoesntHave('rentalItem', function($query) use ($date_from, $date_to) {
$query->whereHas('rental', function($query) use ($date_from, $date_to) {
$query->where('date_from', '<=', $date_to);
$query->where('date_to', '>=', $date_from);
});
})->get();
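The SQL this produces should line up with the working statement from the question, roughly:
select * from `rental_assets` where `rental_asset_category_id` = ? and not exists
(select * from `rental_items` where `rental_assets`.`id` = `rental_items`.`rental_asset_id` and exists
(select * from `rentals` where `rental_items`.`rental_id` = `rentals`.`id`
and `date_from` <= ? and `date_to` >= ? and `rentals`.`deleted_at` is null))
and `rental_assets`.`deleted_at` is null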
Consider the example below:
class Customers extends ActiveRecord
{
public function getOrders()
{
return $this->hasMany(Orders::className(), ['customer_id' => 'id']);
}
public function getOrderItems()
{
return $this->hasMany(OrderItems::className(), ['order_id' => 'id'])
->via('orders');
}
}
How can I generate any one of the following queries from the getOrderItems() relation?
SELECT * FROM `order-items`
LEFT JOIN `orders` ON `orders`.`id` = `order-items`.`order_id`
LEFT JOIN `customers` ON `customer`.`id` = `orders`.`customer_id`
OR
SELECT `order-items`.* FROM `order-items`,`orders`,`customers`
WHERE `customer`.`id` = `orders`.`customer_id` AND `orders`.`id` = `order-items`.`order_id`
OR
SELECT * FROM `order-items` WHERE `order_id` IN(
SELECT `id` FROM `orders` WHERE `customer_id` IN(
SELECT `id` FROM `customers`
)
)
I use the following code to do this:
$customers = Customers::findAll();
$query = $customers[0]->getOrderItems()->createCommand()->rawSql;
but it only generates
SELECT * FROM `order-items`
What should I do?
Use this:
$q = Customers::find()->innerJoinWith('orderItems')->createCommand()->rawSql;
You have to use the relation name ('orderItems') rather than calling the getter function, and build the query with find() instead of findAll().
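Since orderItems goes through the orders relation via via(), innerJoinWith should pull in both tables, producing something along these lines (a rough sketch; the exact column selection, quoting and join conditions are up to Yii):
SELECT `customers`.* FROM `customers`
INNER JOIN `orders` ON `orders`.`customer_id` = `customers`.`id`
INNER JOIN `order-items` ON `order-items`.`order_id` = `orders`.`id`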
When I run a select after a number of joins on my tables I get an output of 2 columns, and I want to select a distinct combination of Col1 and Col2 from the returned rowset.
The query that I run will be something like this:
select a.Col1,b.Col2 from a inner join b on b.Col4=a.Col3
Now the output will be somewhat like this:
Col1 Col2
1 z
2 z
2 x
2 y
3 x
3 x
3 y
4 a
4 b
5 b
5 b
6 c
6 c
6 d
Now I want the output to be something like the following:
1 z
2 y
3 x
4 a
5 b
6 d
It's OK if I pick the second column randomly, as my query output is around a million rows and I really don't think there will be a case where the Col1 and Col2 output is the same; even if that happens, I can edit the value.
Can you please help me with this? I think basically there needs to be a third column holding a row number, and then I need to select the two columns based on a random row number, but I don't know how to translate this into SQL.
Consider the case 1a 1b 1c 1d 1e 2a 2b 2c 2d 2e: a plain GROUP BY will give me all of these results, whereas I want 1a and 2d, or 1a and 2b, or any such combination.
OK, let me explain what I'm expecting:
with rs as(
select a.Col1,b.Col2,rownumber() as rowNumber from a inner join b on b.Col4=a.Col3)
select rs.Col1,rs.Col2 from rs where rs.rowNumber=Round( Rand() *100)
Now I am not sure how to get the row number or the random selection working correctly!
Thanks in advance.
If you simply don't care which Col2 value is returned:
select a.Col1,MAX(b.Col2) AS Col2
from a inner join b on b.Col4=a.Col3
GROUP BY a.Col1
If you do want a random value you could use the approach below.
;WITH T
AS (SELECT a.Col1,
b.Col2,
ROW_NUMBER() OVER (PARTITION BY a.Col1 ORDER BY (SELECT NEWID())
) AS RN
FROM a
INNER JOIN b
ON b.Col4 = a.Col3)
SELECT Col1,
Col2
FROM T
WHERE RN = 1
Or alternatively use a CLR aggregate function. This approach has the advantage that it eliminates the requirement to sort each partition by NEWID(); an example implementation is below.
using System;
using System.Data.SqlTypes;
using System.IO;
using System.Security.Cryptography;
using Microsoft.SqlServer.Server;
[Serializable]
[SqlUserDefinedAggregate(Format.UserDefined, MaxByteSize = 8000)]
public struct Random : IBinarySerialize
{
private MaxSoFar _maxSoFar;
public void Init()
{
}
public void Accumulate(SqlString value)
{
int rnd = GetRandom();
if (!_maxSoFar.Initialised || (rnd > _maxSoFar.Rand))
_maxSoFar = new MaxSoFar(value, rnd) {Rand = rnd, Value = value};
}
public void Merge(Random group)
{
// Keep whichever side drew the larger random value, consistent with Accumulate
if (!_maxSoFar.Initialised || (group._maxSoFar.Initialised && group._maxSoFar.Rand > _maxSoFar.Rand))
{
_maxSoFar = group._maxSoFar;
}
}
private static int GetRandom()
{
var buffer = new byte[4];
new RNGCryptoServiceProvider().GetBytes(buffer);
return BitConverter.ToInt32(buffer, 0);
}
public SqlString Terminate()
{
return _maxSoFar.Value;
}
#region Nested type: MaxSoFar
private struct MaxSoFar
{
private SqlString _value;
public MaxSoFar(SqlString value, int rand) : this()
{
Value = value;
Rand = rand;
Initialised = true;
}
public SqlString Value
{
get { return _value; }
set
{
_value = value;
IsNull = value.IsNull;
}
}
public int Rand { get; set; }
public bool Initialised { get; set; }
public bool IsNull { get; set; }
}
#endregion
#region IBinarySerialize Members
public void Read(BinaryReader r)
{
_maxSoFar.Rand = r.ReadInt32();
_maxSoFar.Initialised = r.ReadBoolean();
_maxSoFar.IsNull = r.ReadBoolean();
if (_maxSoFar.Initialised && !_maxSoFar.IsNull)
_maxSoFar.Value = r.ReadString();
}
public void Write(BinaryWriter w)
{
w.Write(_maxSoFar.Rand);
w.Write(_maxSoFar.Initialised);
w.Write(_maxSoFar.IsNull);
if (!_maxSoFar.IsNull)
w.Write(_maxSoFar.Value.Value);
}
#endregion
}
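Once the assembly is compiled and deployed to the server, the aggregate would be registered roughly like this (a sketch; MyClrAssembly is a placeholder for the real assembly name, and nvarchar(4000) is just one reasonable choice for the parameter and return types):
CREATE AGGREGATE dbo.Random (@value nvarchar(4000))
RETURNS nvarchar(4000)
EXTERNAL NAME MyClrAssembly.[Random];
After that the grouped query becomes:
SELECT a.Col1, dbo.Random(b.Col2) AS Col2
FROM a
INNER JOIN b ON b.Col4 = a.Col3
GROUP BY a.Col1;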
You need to group by a.Col1 to get rows that are distinct by only a.Col1; then, since b.Col2 is not included in the group, you need a suitable aggregate function to reduce all the values in the group to just one. MIN is good enough if you just want one of the values.
select a.Col1, MIN(b.Col2) as c2
from a
inner join b on b.Col4=a.Col3
group by a.Col1
If I understand you correctly, you want to have one line for each combination of columns 1 and 2. That can easily be done by using GROUP BY or DISTINCT, for instance:
SELECT col1, col2
FROM Your Join
GROUP BY col1, col2
You must use a GROUP BY clause, with an aggregate for Col2:
select a.Col1, MIN(b.Col2) as Col2
from a
inner join b on b.Col4=a.Col3
group by a.Col1