How to modify a value in a column with TypeORM - sql

I have two tables, ContractPoint and ContractPointHistory.
I would like to get each contractPoint with its point subtracted by the related pointChange values. For example: ContractPoint has id: 3, point: 5, and ContractPointHistory has contractPointId: 3 and pointChange: -5, so after the adjustment the point in contractPoint should be 0.
I wrote this code, but it works only with getRawMany(), not with getMany():
const contractPoints = await getRepository(ContractPoint)
  .createQueryBuilder('contractPoint')
  .addSelect('"contractPoint".point + COALESCE((SELECT SUM(cpHistory.point_change) FROM contract_point_history AS cpHistory WHERE cpHistory.contract_point_id = contractPoint.id), 0) AS points')
  .andWhere('EXTRACT(YEAR FROM contractPoint.validFrom) = :year', { year })
  .andWhere('contractPoint.contractId = :contractId', { contractId })
  .orderBy('contractPoint.grantedAt', OrderByDirection.Desc)
  .getMany();

The getMany method returns hydrated entities, so it only gives you the columns that are mapped on the entity. If you want to select additional computed values, such as your points alias, you need to use getRawMany.
As per the documentation:
There are two types of results you can get using select query builder: entities or raw results. Most of the time, you need to select real entities from your database, for example, users. For this purpose, you use getOne and getMany. But sometimes you need to select some specific data, let's say the sum of all user photos. This data is not an entity, it's called raw data. To get raw data, you use getRawOne and getRawMany.
From this, we can conclude that the computed points column you want cannot be returned through the getMany method.
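A minimal sketch (not part of the original answer) of reading the same computed column back with getRawMany(); the alias names and the surrounding year / contractId variables are assumptions carried over from the question:
const rawRows = await getRepository(ContractPoint)
  .createQueryBuilder('contractPoint')
  // Select the entity id plus the computed alias; custom aliases only survive on raw results.
  .select('contractPoint.id', 'id')
  .addSelect('contractPoint.point + COALESCE((SELECT SUM(cpHistory.point_change) FROM contract_point_history AS cpHistory WHERE cpHistory.contract_point_id = contractPoint.id), 0)', 'points')
  .where('EXTRACT(YEAR FROM contractPoint.validFrom) = :year', { year })
  .andWhere('contractPoint.contractId = :contractId', { contractId })
  .orderBy('contractPoint.grantedAt', 'DESC')
  .getRawMany();
// e.g. [{ id: 3, points: '0' }, ...] — note that SUM/COALESCE results often come back as strings.
Alternatively, getRawAndEntities() returns both the hydrated entities and the raw rows, so the computed points value can be matched back to each entity by hand.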

Related

Join multiple tables - Flutter - Firebase

I have three collections: Cars, Makes, Models. A car has a make, a model and a price. The reason I have separate collections is that I would eventually like to display the makes on their own and the models on their own.
I would like to display each car with its make, model and price.
In SQL I can join the tables and compare by IDs. I'm not sure how to replicate a join in NoSQL.
In NoSQL you can't do it in one query; you have to split it into multiple queries:
First query: get the data from the main table (in your case, Cars).
Second query: get the data from the joined tables (Makes, Models).
For example, using JavaScript:
carTable.on('value', function (snapshot) {
  // Second query #1: look up the make referenced by this car.
  var makeID = snapshot.val().makeID;
  makesTable.child('makes').child(makeID).once('value', function (makeSnap) {
    console.log(makeID + ': ' + makeSnap.val());
  });

  // Second query #2: look up the model referenced by this car.
  var modelsID = snapshot.val().modelsID;
  modelsTable.child('models').child(modelsID).once('value', function (modelSnap) {
    console.log(modelsID + ': ' + modelSnap.val());
  });
});
So you have to join the tables in two steps.
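As a hedged sketch of the same two-step join using the promise-based API (assuming the namespaced Realtime Database JS SDK, an initialized app, and the node names cars, makes and models):
const db = firebase.database();

async function loadCar(carId) {
  // First query: the car itself.
  const carSnap = await db.ref('cars/' + carId).once('value');
  const car = carSnap.val();

  // Second queries: the referenced make and model, fetched in parallel.
  const [makeSnap, modelSnap] = await Promise.all([
    db.ref('makes/' + car.makeID).once('value'),
    db.ref('models/' + car.modelsID).once('value'),
  ]);

  return { ...car, make: makeSnap.val(), model: modelSnap.val() };
}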

ecto select fields with map function as alias or different name

iex(18)> fields = [:id]
[:id]
iex(19)> fields_map = %{id: :job_id}
%{id: :job_id}
iex(20)> query = from p in Qber.V1.JobModel, where: p.id == 1, select: map(p, ^fields)
#Ecto.Query<from j in Qber.V1.JobModel, where: j.id == 1, select: map(j, [:id])>
I want to select my fields dynamically, with AS aliases, based on the params I receive (this app will have heavy admin-dashboard queries, some of which involve joins), so I need a dynamic solution. Is there a way to pass a map instead of a list to the map function so that Ecto returns the selected result under the custom names from that map? Instead of
[%{id: 1}]
I want the result to be
[%{job_id: 1}]
Note: I have searched the web with many different keywords and gone through almost all the docs several times, but I have been unable to find a solution.
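One hedged workaround (not from the original question, and assuming a Qber.Repo module exists): keep select: map(p, ^fields) and rename the keys in Elixir after the query runs:
import Ecto.Query

fields = [:id]
fields_map = %{id: :job_id}

query = from p in Qber.V1.JobModel, where: p.id == 1, select: map(p, ^fields)

query
|> Qber.Repo.all()
|> Enum.map(fn row ->
  # Replace each key with its alias from fields_map, falling back to the original key.
  Map.new(row, fn {key, value} -> {Map.get(fields_map, key, key), value} end)
end)
# => [%{job_id: 1}]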

PIG: FILTER a relation against the next row of the same relation

I have been searching for quite a while to solve my problem but have found almost nothing helpful.
Hopefully some of you can give me a tip.
I have a relation A with the following format: username, timestamp, ip
For example:
Harald 2014-02-18T16:14:49.503Z 123.123.123.123
Harald 2014-02-18T16:14:51.503Z 123.123.123.123
Harald 2014-02-18T16:14:55.503Z 321.321.321.321
I want to find out who changed his IP address in less than 5 seconds, so the second and third rows are the interesting ones.
I want to group the relation by username and compare the timestamp of the current row with that of the next row: if the IP address is not the same and the timestamp is less than 5 seconds later, that pair should appear in the output.
Could someone help me with this issue?
Regards.
First of all, I want to thank you for your time, but I am actually stuck at the Sessionize part.
This is my incoming data:
aoebcu 2014-02-19T14:23:17.503Z 220.61.65.25
aoebcu 2014-02-19T14:23:14.503Z 222.117.144.19
aoebcu 2014-02-19T14:23:14.503Z 222.117.144.19
jekgru 2014-02-19T14:23:14.503Z 213.56.157.109
zmembx 2014-02-19T14:23:12.503Z 199.188.198.91
qhixcg 2014-02-19T14:23:11.503Z 203.40.104.119
and my code so far looks like this:
hijack_Reduced = FOREACH finalLogs GENERATE ClientUserName, timestamp, OriginalClientIP;
hijack_Filtered = FILTER hijack_Reduced BY OriginalClientIP != '-';
hijack_Sessionized = FOREACH (GROUP hijack_Filtered BY ClientUserName) {
    views = ORDER hijack_Filtered BY timestamp;
    GENERATE FLATTEN(Sessionize(views)) AS (ClientUserName,timestamp,OriginalClientIP,session_id);
}
but when I run this script, I get the following error message:
15:36:22 ERROR -
org.apache.pig.tools.pigstats.SimplePigStats.setBackendException(542)
| ERROR 0: Exception while executing [POUserFunc (Name:
POUserFunc(datafu.pig.sessions.Sessionize)[bag] - scope-199 Operator
Key: scope-199) children: null at []]:
java.lang.IllegalArgumentException: Invalid format: "aoebcu"
I have already tried a lot, but nothing has worked.
Do you have an idea?
Regards
While you could write a UDF for this, you can actually make use of the UDFs already available in Apache DataFu to solve this.
My solution involves applying sessionization to the data. Basically you look at consecutive events and assign each event a session ID. If the time elapsed between two events exceeds a specified amount of time, in your case 5 seconds, then the next event gets a new session ID. Otherwise consecutive events get the same session ID. Once each event is assigned its session ID the rest is easy. We group by session ID and look for sessions that have more than one distinct IP address.
I'll walk through my solution.
Suppose you have the following input data. Both Harold and Kumar change their IP addresses, but Harold does it within 5 seconds, while Kumar does not. So the output of our script should simply be "Harold".
Harold,2014-02-18T16:14:49.503Z,123.123.123.123
Harold,2014-02-18T16:14:51.503Z,123.123.123.123
Harold,2014-02-18T16:14:55.503Z,321.321.321.321
Kumar,2014-02-18T16:14:49.503Z,123.123.123.123
Kumar,2014-02-18T16:14:55.503Z,123.123.123.123
Kumar,2014-02-18T16:15:05.503Z,321.321.321.321
Load the data
data = LOAD 'input' using PigStorage(',')
AS (user:chararray,time:chararray,ip:chararray);
Now define a couple UDFs from DataFu. The Sessionize UDF performs sessionization as I described earlier. The DistinctBy UDF will be used to find the distinct IP addresses within each session.
define Sessionize datafu.pig.sessions.Sessionize('5s');
define DistinctBy datafu.pig.bags.DistinctBy('1');
Group the data by user, sort by time, and apply the Sessionize UDF. Note that the timestamp must be the first field, as this is what Sessionize expects. This UDF appends a session ID to each tuple.
data = FOREACH data GENERATE time,user,ip;
data_sessionized = FOREACH (GROUP data BY user) {
    views = ORDER data BY time;
    GENERATE flatten(Sessionize(views)) as (time,user,ip,session_id);
}
Now that the data is sessionized, we can group by the user and session. I group by user too because I want to spit this value back out. We pass the bag of events into the DistinctBy UDF. Check the documentation of this UDF for a more detailed description. But essentially we will get as many tuples as there are distinct IP addresses per session. Note that I have removed the time from the relation below. This is because 1) it isn't needed, and 2) the DistinctBy in 1.2.0 of DataFu has a bug when handling fields containing dashes, as the time field does.
data_sessionized = FOREACH data_sessionized GENERATE user,ip,session_id;
data_sessionized = FOREACH (GROUP data_sessionized BY (user, session_id)) GENERATE
    group.user as user,
    SIZE(DistinctBy(data_sessionized)) as distinctIpCount;
Now select all the sessions that had more than one distinct IP address and return the distinct users for these sessions.
data_sessionized = FILTER data_sessionized BY distinctIpCount > 1;
data_sessionized = FOREACH data_sessionized GENERATE user;
data_sessionized = DISTINCT data_sessionized;
This produces simply:
Harold
Here is the full source code, which you should be able to paste directly into the DataFu unit tests and run:
/**
define Sessionize datafu.pig.sessions.Sessionize('5s');
define DistinctBy datafu.pig.bags.DistinctBy('1'); -- distinct by ip
data = LOAD 'input' using PigStorage(',') AS (user:chararray,time:chararray,ip:chararray);
data = FOREACH data GENERATE time,user,ip;
data_sessionized = FOREACH (GROUP data BY user) {
    views = ORDER data BY time;
    GENERATE flatten(Sessionize(views)) as (time,user,ip,session_id);
}
data_sessionized = FOREACH data_sessionized GENERATE user,ip,session_id;
data_sessionized = FOREACH (GROUP data_sessionized BY (user, session_id)) GENERATE
    group.user as user,
    SIZE(DistinctBy(data_sessionized)) as distinctIpCount;
data_sessionized = FILTER data_sessionized BY distinctIpCount > 1;
data_sessionized = FOREACH data_sessionized GENERATE user;
data_sessionized = DISTINCT data_sessionized;
STORE data_sessionized INTO 'output';
*/
@Multiline private String sessionizeUserIpTest;

private String[] sessionizeUserIpTestData = new String[] {
    "Harold,2014-02-18T16:14:49.503Z,123.123.123.123",
    "Harold,2014-02-18T16:14:51.503Z,123.123.123.123",
    "Harold,2014-02-18T16:14:55.503Z,321.321.321.321",
    "Kumar,2014-02-18T16:14:49.503Z,123.123.123.123",
    "Kumar,2014-02-18T16:14:55.503Z,123.123.123.123",
    "Kumar,2014-02-18T16:15:05.503Z,321.321.321.321"
};

@Test
public void sessionizeUserIpTest() throws Exception
{
    PigTest test = createPigTestFromString(sessionizeUserIpTest);
    this.writeLinesToFile("input", sessionizeUserIpTestData);
    List<Tuple> result = this.getLinesForAlias(test, "data_sessionized");
    assertEquals(result.size(), 1);
    assertEquals(result.get(0).get(0), "Harold");
}

Rails - Date Query using group by and count

I'm trying to get the number of calls by commercial week. I've written the sql:
select strftime('%W', date_time_origination) as week, count(*)
from calls
group by week;
I then tried to run this in rails, firstly using this:
calls_by_week = Call.select("strftime('%W',date_time_origination) as week, count(*) as total_number_of_calls").group("week")
This returned a collection of objects with the correct data but repeated multiple times (json representation):
[{ "total_number_of_calls": 5435, "week": 39 },.....]
Given that it should have returned one object, I tried changing the rails query structure:
calls_by_week = Call.select("strftime('%W', date_time_origination) as week").group("strftime('%W', date_time_origination)").count
This time, the query returned one hash, minus the named params:
{ "31": 3123, "32": 1231,... }
I'd like to have the data, in json, presented as follows:
{ "week": 31, "total_number_of_calls": 12412,.....}
Is there a way of eliminating the duplication produced by the first query?
First: if you use Call.select(...), Rails will try to instantiate Call objects. What you want are not Call objects, just hashes. You can use Call.connection.execute("SELECT .... blah blah").to_enum.to_a and work with the resulting array of arrays.
But to convert your result into the shape you want, use:
returned_hash = { "31": 3123, "32": 1231, ... }
what_you_want = returned_hash.collect do |k, v|
  { "week" => k.to_i, "total_number_of_calls" => v.to_i }
end
Another way is to apply .uniq to the query; it will still build Call objects, but at least they will not contain repeated data.
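As a hedged sketch (assuming SQLite's strftime, as in the question), the grouped count query from the second attempt can be reshaped into the desired structure like this:
counts = Call.group("strftime('%W', date_time_origination)").count
# => { "31" => 3123, "32" => 1231, ... }

calls_by_week = counts.map do |week, total|
  { "week" => week.to_i, "total_number_of_calls" => total }
end
# => [{ "week" => 31, "total_number_of_calls" => 3123 }, ...]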

Limit models to select

I have a database table called Event which in CakePHP has its relationships coded to like so:
var $belongsTo = array('Sport');
var $hasOne = array('Result', 'Point', 'Writeup', 'Timetable', 'Photo');
Now I am doing a query and only want to pull out Sport, Point, and Timetable,
which would result in me retrieving Sports, Events, Points, and Timetables.
The reason for not pulling everything is that the results contain 17,000+ rows.
Is there a way to only select those tables using:
$this->Event->find('all');
I have had a look at the API but can't see how it's done.
You should set recursive to -1 in your AppModel and only pull the things you require. Never use a recursive value of 2; http://book.cakephp.org/view/1323/Containable is awesome.
Just use $this->Event->find('all', array('contain' => array()));
If you do the recursive = -1 trick in AppModel, this is not needed; it would just be find('all') like you have.
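A hedged sketch of the Containable approach for the specific associations the question names (assuming the behavior is attached, e.g. public $actsAs = array('Containable'); in AppModel):
// Only the Event rows plus the named associations are fetched;
// Result, Writeup and Photo are left out of the query entirely.
$events = $this->Event->find('all', array(
    'contain' => array('Sport', 'Point', 'Timetable')
));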