Outputting values not equal to certain values in Yii2

I would like to output values not equal to certain values, but it returns an error of:
Failed to prepare SQL: SELECT * FROM `tblsuunit` WHERE `unitid` != :qp0
There are two models. In the first model I am getting the array of ids:
public function actionSunits($id){
    $unitslocation = new Unitslocation();
    $id2 = Unitslocation::find()->where(['officelocationid' => $id])->all();
    foreach ($id2 as $ids) {
        print_r($ids['unitid'] . "<br>");
    }
}
This outputs the ids as
8
9
11
12
13
14
16
I would then like to take these ids, compare them against another model (the Units model), get the id values that do not match the above, and output them.
So I have added:
$idall = Units::find()->where(['!=', 'unitid', $ids])->all();
So the whole controller action becomes
public function actionSunits($id){
    $unitslocation = new Unitslocation();
    $id2 = Unitslocation::find()->where(['officelocationid' => $id])->all();
    foreach ($id2 as $ids) {
        $idall = Units::find()->where(['!=', 'unitid', $ids])->all();
    }
    var_dump($idall);
}
This is the units model table:
If it were working, it should return 7 and 10.
What could be wrong?

You should fix your code and simply use a not in condition, e.g.:
// $uls will be an array of Unitslocation objects
$uls = Unitslocation::find()->where(['officelocationid'=>$id])->all();
// $uids will contain the unitids
$uids = \yii\helpers\ArrayHelper::getColumn($uls, 'unitid');
// then simply use a not in condition
$units = Units::find()->where(['not in', 'unitid', $uids])->all();
$idall = \yii\helpers\ArrayHelper::getColumn($units, 'unitid');
Read more about ActiveQuery::where() and ArrayHelper::getColumn().

Try with:
$idall = Units::find()->where(['not in','unitid',$ids])->all();
Info: https://github.com/yiisoft/yii2/blob/master/docs/guide/db-query-builder.md
Operand 1 should be a column or DB expression. Operand 2 can be either an array or a Query object.
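For example, a minimal sketch of the Query-object form (assuming the same Unitslocation and Units models from the question), which pushes the exclusion into a single SQL statement instead of fetching the ids first:
// sub-query used as operand 2 (an ActiveQuery is a Query object)
$subQuery = Unitslocation::find()
    ->select('unitid')
    ->where(['officelocationid' => $id]);
$units = Units::find()
    ->where(['not in', 'unitid', $subQuery])
    ->all();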


Filter data from arrays

What I need is to sort data I get from an API into different arrays, but add a '0' value where one type has no value and the other type(s) do. Is this possible with array.filter, since it's faster than a bunch of for and if loops?
So let's say I get the following data from SQL into the API:
Day Type Amount
-----------------------
12.1.2022 1 11
12.1.2022 2 4
13.1.2022 1 5
14.1.2022 2 9
16.1.2022 2 30
If I run this code:
this.data = result.Data;
let date = [];
const data = { 'dataType1': [], 'dataType2': [], 'dataType3': [], 'dataType4': [] };
/* only writing the example for 2 types since for 4 it would be too long, but I would like
   an answer that works for any amount of types, or for 4 types */
this.data.forEach(x => {
    var lastAddress = date[date.length - 1];
    if (x.type == 1) { data.dataType1.push(x.Amount); }
    if (x.type == 2) { data.dataType2.push(x.Amount); }
    lastAddress != x.Day ? date.push(x.Day) : '';
});
The array I get for type1 is [11,5]
and for type2 I get [4,9,30].
And for dates I get all the unique dates.
But the data I would like is: [11,5,0,0] and [4,0,9,30].
The size of each type array also has to match the size of the Day array at the end,
which would be the unique dates, in this case:
[12.1.2022, 13.1.2022, 14.1.2022, 16.1.2022]
I have already tried to solve this with some for, if and while loops, but it gets way too messy, so I'm looking for an alternative.
Also, I have 4 types, but for reference I only wrote the sample for 2.
You can:
first get the unique types,
then loop over the data to create an array of objects with the shape
{date: string, values: number[]}
// create a function:
matrix(data: any[]) {
    const uniqTypes = data.reduce((a, b) => a.indexOf(b.type) >= 0 ? a : [...a, b.type], []);
    const result = [];
    data.forEach((x: any) => {
        let index = result.findIndex(r => r.date == x.date);
        if (index < 0) {
            result.push({ date: x.date, values: [...uniqTypes].fill(0) });
            index = result.length - 1;
        }
        result[index].values[uniqTypes.indexOf(x.type)] = x.amount;
    });
    return result;
}
// and use like
result = this.matrix(this.data);
NOTE: You can create uniqTypes outside the function as a variable and pass it as an argument to the function.
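As a minimal sketch (building on the matrix() helper above, and assuming its lowercase date/type/amount field names), here is one way to derive the per-type arrays and the unique-date array the question asks for from its result:
// rows: [{date, values: [amountForType1, amountForType2, ...]}, ...]
const rows = this.matrix(this.data);
const dates = rows.map(r => r.date);           // unique dates, e.g. ['12.1.2022', '13.1.2022', ...]
const typeSeries = rows.length
    ? rows[0].values.map((_, i) => rows.map(r => r.values[i]))  // e.g. [[11,5,0,0], [4,0,9,30]]
    : [];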
const type1 = [];
const type2 = [];
data.forEach(item => {
    if (item.type == 1) {
        type1.push(item.amount);
        type2.push(0);
    } else {
        type1.push(0);
        type2.push(item.amount);
    }
});
console.log(type1);
console.log(type2);

Invalid parameter number when using whereNotIn in laravel

I want to print the list of options from the database on my blade, getting all options except the ones that are already selected. So what I've already done is get the selected options (which are saved in another table) as an array:
$ctg_id[] = [];
foreach($vendor->category as $item){
    $ctg_id[] = [$item->category_id];
}
Then finally print it:
$category = ItemCategory::whereNotIn('id',$ctg_id)->get();
But it returns an error that says:
"SQLSTATE[HY093]: Invalid parameter number (SQL: select * from category_item where id not in (4, 3, ?))" Is there anything wrong with my code?
You are using a two-dimensional array; you need to change it to one-dimensional:
$ctg_id = [];
foreach($vendor->category as $item){
    $ctg_id[] = $item->category_id;
}
I recommend using the pluck method, like this:
$category = ItemCategory::whereNotIn('id', $vendor->category->pluck('category_id'))->get();

Iterate through a column in a Dataset which has an array of key-value pairs and find the pair with the max value

I have data in a dataframe, which was obtained from Azure Event Hub.
Then I convert this data to a JSON object and store the required data in a dataset as shown below.
Code for obtaining data from Event Hub and storing it in a dataframe:
val connectionString = ConnectionStringBuilder(<ENDPOINT URL>)
  .setEventHubName(<EVENTHUB NAME>).build
val currTime = Instant.now
val ehConf = EventHubsConf(connectionString)
  .setConsumerGroup("<CONSUMER GRP>")
  .setStartingPosition(EventPosition
    .fromEnqueuedTime(currTime.minus(Duration.ofMinutes(30))))
  .setEndingPosition(EventPosition.fromEnqueuedTime(currTime))
val reader = spark.read.format("eventhubs").options(ehConf.toMap).load()
var SIGNALS = reader
  .select(get_json_object(($"body").cast("string"), "$.NUM").alias("NUM"),
    get_json_object(($"body").cast("string"), "$.SIG1").alias("SIG1"),
    get_json_object(($"body").cast("string"), "$.SIG2").alias("SIG2"),
    get_json_object(($"body").cast("string"), "$.SIG3").alias("SIG3"),
    get_json_object(($"body").cast("string"), "$.SIG4").alias("SIG4")
  )
val SIGNALSFiltered = SIGNALS.filter(col("SIG1").isNotNull &&
  col("SIG2").isNotNull && col("SIG3").isNotNull && col("SIG4").isNotNull)
The data obtained at SIGNALSFiltered is shown below.
+-----------------+--------------------+--------------------+--------------------+--------------------+
| NUM| SIG1| SIG2| SIG3| SIG4|
+-----------------+--------------------+--------------------+--------------------+--------------------+
|XXXXX01|[{"TIME":15695605310...|[{"TIME":15695605310...|[{"TIME":15695605310...|[{"TIME":15695605310...|
|XXXXX02|[{"TIME":15695604780...|[{"TIME":15695604780...|[{"TIME":15695604780...|[{"TIME":15695604780...|
|XXXXX03|[{"TIME":15695605310...|[{"TIME":15695605310...|[{"TIME":15695605310...|[{"TIME":15695605310...|
|XXXXX04|[{"TIME":15695605310...|[{"TIME":15695605310...|[{"TIME":15695605310...|[{"TIME":15695605310...|
|XXXXX05|[{"TIME":15695605310...|[{"TIME":15695605310...|[{"TIME":15695605310...|[{"TIME":15695605310...|
|XXXXX06|[{"TIME":15695605340...|[{"TIME":15695605340...|[{"TIME":15695605340...|[{"TIME":15695605340...|
|XXXXX07|[{"TIME":15695605310...|[{"TIME":15695605310...|[{"TIME":15695605310...|[{"TIME":15695605310...|
|XXXXX08|[{"TIME":15695605310...|[{"TIME":15695605310...|[{"TIME":15695605310...|[{"TIME":15695605310...|
If we check the entire data for a single row, it will be as below.
|XXXXX01|[{"TIME":1569560531000,"VALUE":3.7825},{"TIME":1569560475000,"VALUE":3.7812},{"TIME":1569560483000,"VALUE":3.7812},{"TIME":1569560491000,"VALUE":34.7875}]|
[{"TIME":1569560537000,"VALUE":3.7825},{"TIME":1569560481000,"VALUE":34.7825},{"TIME":1569560489000,"VALUE":34.7825},{"TIME":1569560497000,"VALUE":34.7825}]|
[{"TIME":1569560505000,"VALUE":34.7825},{"TIME":1569560513000,"VALUE":34.7825},{"TIME":1569560521000,"VALUE":34.7825},{"TIME":1569560527000,"VALUE":34.7825}]|
[{"TIME":1569560535000,"VALUE":34.7825},{"TIME":1569560479000,"VALUE":34.7825},{"TIME":1569560487000,"VALUE":34.7825}]
I want only the highest-TIME pair from each column, not all of the TIME-VALUE pairs. The output should be as shown below.
+-----------------+-----------------------------+---------------------------------------+---------------------------------------+----------------------------------------+
| NUM| SIG1| SIG2| SIG3| SIG4|
+-----------------+-----------------------------+---------------------------------------+---------------------------------------+----------------------------------------+
|XXXXX01|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":4.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":5.7825}]|
|XXXXX02|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":6.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":7.7825}]|
|XXXXX03|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":9.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":8.7825}]|
How do I iterate through each column in each row and get the highest TIME-VALUE pair?
After getting the highest in each column (SIG1, ..., SIG4), I have to update only the value of TIME in all columns with the highest among them.
Is there any way to convert the base dataset as below? Each element in a column should be converted to a new row.
+-----------------+-----------------------------+---------------------------------------+---------------------------------------+----------------------------------------+
| NUM| SIG1| SIG2| SIG3| SIG4|
+-----------------+-----------------------------+---------------------------------------+---------------------------------------+----------------------------------------+
|XXXXX01|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|
|XXXXX01|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|
|XXXXX01|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]| null |[{"TIME":1569560531000,"VALUE":3.7825}]|
|XXXXX01|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|
|XXXXX02|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|
|XXXXX02|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|
|XXXXX02|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|
|XXXXX02|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|
Any leads or help is appreciated! Thanks in Advance.
You have to write a user-defined function like the one below, which will loop over your data and get the max TIME value.
Note: the UDF is just for reference; you can change it as per your requirements.
How to Iterate through each column in each row and get the highest TIME-VALUE pair?
scala> import scala.util.parsing.json.JSON
scala> import org.apache.spark.sql.functions.{udf, col}
scala> import org.apache.spark.sql.expressions.UserDefinedFunction
scala> def MaxTime: UserDefinedFunction = udf((json: String) => {
    val pars = JSON.parseFull(json)
    var output = ""
    pars.foreach { x =>
        val y = x.asInstanceOf[List[Any]]
        var i = 1
        var TimeMap = scala.collection.mutable.Map[String, Long]()
        var ValueMap = scala.collection.mutable.Map[String, Double]()
        y.foreach { zz =>
            val z = zz.asInstanceOf[Map[String, Double]]
            TimeMap(i.toString) = z("TIME").toLong
            ValueMap(i.toString) = z("VALUE")
            i = i + 1
        }
        output = """[{"TIME" : """ + TimeMap.maxBy(_._2)._2.toString + """ ,"VALUE": """ + ValueMap(TimeMap.maxBy(_._2)._1) + """}]"""
    }
    output
})
scala> SIGNALSFiltered.withColumn("SIG1", MaxTime(col("SIG1"))).withColumn("SIG2", MaxTime(col("SIG2"))).withColumn("SIG3", MaxTime(col("SIG3"))).withColumn("SIG4", MaxTime(col("SIG4"))).show(false)
After getting highest in each columns (SIG1,....SIG4) have to update only the value of TIME in all columns with highest among them.
Write the same kind of UDF as above and pass the complete row (or all of the signal columns) as a parameter. Then parse each column's value into a Map and get the maximum among all columns; a rough sketch of this idea follows below.
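A rough sketch only (not code from the original answer; the names parsePairs, maxTimeAll, rewrite and MAX_TIME are made up here, and the JSON layout is assumed from the question): it computes the highest TIME across SIG1..SIG4 and then stamps every column with it, keeping each column's own max-TIME VALUE.
import org.apache.spark.sql.functions.{udf, col, array}
import scala.util.parsing.json.JSON

// parse one column's JSON array into (TIME, VALUE) pairs
val parsePairs = (json: String) =>
  JSON.parseFull(json).toList.flatMap { parsed =>
    parsed.asInstanceOf[List[Map[String, Double]]].map(m => (m("TIME").toLong, m("VALUE")))
  }

// highest TIME over all signal columns of a row
val maxTimeAll = udf((cols: Seq[String]) => {
  val times = cols.flatMap(parsePairs).map(_._1)
  if (times.isEmpty) 0L else times.max
})

// keep the column's own max-TIME VALUE, but stamp it with the row-wide max TIME
val rewrite = udf((json: String, maxTime: Long) => {
  val pairs = parsePairs(json)
  if (pairs.isEmpty) json
  else s"""[{"TIME":$maxTime,"VALUE":${pairs.maxBy(_._1)._2}}]"""
})

val withMax = SIGNALSFiltered.withColumn("MAX_TIME",
  maxTimeAll(array(col("SIG1"), col("SIG2"), col("SIG3"), col("SIG4"))))
val result = Seq("SIG1", "SIG2", "SIG3", "SIG4")
  .foldLeft(withMax)((df, c) => df.withColumn(c, rewrite(col(c), col("MAX_TIME"))))
  .drop("MAX_TIME")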

System.IndexOutOfRangeException when trying to save results from a stored procedure

I have a SQL Server stored procedure with this output:
unit_id on_date notes type_code type_order status (No column name) (No column name)
3 2016-12-08 00:00:00.000 AVL -1 D NULL 16
3 2016-12-08 00:00:00.000 RSU 1 D 3 2
3 2016-12-08 00:00:00.000 TOW 2 D 6 5
.......etc
What I am trying to do is get these rows and columns so I can display them in a grid (spreadsheet-like) view, and use them as variables in a bar graph.
I've tried this code (in my controller):
var model = new List<ResultsModel>();
SqlCommand command3 = new SqlCommand("dbo.pr_name");
command3.Parameters.Add(new SqlParameter { ParameterName = "@from", SqlDbType = System.Data.SqlDbType.DateTime, Value = NowDate });
command3.Parameters.Add(new SqlParameter { ParameterName = "@to", SqlDbType = System.Data.SqlDbType.DateTime, Value = "2017-09-21 00:00:00" });
command3.Parameters.Add(new SqlParameter { ParameterName = "@method", SqlDbType = System.Data.SqlDbType.Int, Value = 3 });
command3.CommandType = System.Data.CommandType.StoredProcedure;
using (var SPOutput3 = command.ExecuteReader())
{
    model.Add(new ResultsModel()
    {
        unit_id = (Int32)SPOutput3["unit_id"],
        on_date = (DateTimeOffset)SPOutput3["on_date"],
        notes = SPOutput3["notes"].ToString(),
        type_code = (string)SPOutput3["type_code"]
        // other properties
    });
    return View(model);
}
and in my view
@*@foreach (var item in Model)
{
    <tr>
        <td>@item.on_date</td>
        <td>@item.type_code</td>
    </tr>
}*@
The code breaks at the line:
unit_id = (Int32)SPOutput3["unit_id"],
with an error System.IndexOutOfRangeException.
If I comment out that line, the error moves onto the next one etc.
The advice I was after is: is the error telling me that there is no column called unit_id in the output received, even though the output in SSMS shows it?
... and what can I do to fix this?
Also, if the column has no name, how can I assign it one, like unit_id, on_date, etc.?
Thanks
Well, first off, you need to call Read in a while loop:
using (var SPOutput3 = command.ExecuteReader())
{
while (SPOutput3.Read())
{
...
}
}
Then, inside the while loop, you're dealing with an individual row. So you can do:
while (SPOutput3.Read())
{
var unit_id = SPOutput3["unit_id"] as int?;
}
You want to use as rather than a direct cast here so you can stave off potential issues if bad data is returned or the type isn't what you think it is. If you need a non-nullable value, then you can simply use the null coalesce operator to provide a default:
SPOutput3["unit_id"] as int? ?? 0;

How can I generate schema from text file? (Hadoop-Pig)

Somehow I got filename.log, which looks like, for example (tab separated):
Name:Peter Age:18
Name:Tom Age:25
Name:Jason Age:35
Because the value of the key column may differ, I cannot define a schema when I load the text like:
a = load 'filename.log' as (Name:chararray,Age:int);
Neither do I want to call columns by position like:
b = foreach a generate $0,$1;
What I want to do is, from only that filename.log, make it possible to call each value by key, for example:
a = load 'filename.log' using PigStorage('\t');
b = group a by Name;
c = foreach b generate group, COUNT(a);
dump c;
For that purpose, I wrote a Java UDF which separates key:value and gets the value for every field in the tuple, as below:
public class SPLITALLGETCOL2 extends EvalFunc<Tuple> {
    @Override
    public Tuple exec(Tuple input) {
        TupleFactory mTupleFactory = TupleFactory.getInstance();
        ArrayList<String> mProtoTuple = new ArrayList<String>();
        Tuple output;
        String target = input.toString().substring(1, input.toString().length() - 1);
        String[] tokenized = target.split(",");
        try {
            for (int i = 0; i < tokenized.length; i++) {
                mProtoTuple.add(tokenized[i].split(":")[1]);
            }
            output = mTupleFactory.newTupleNoCopy(mProtoTuple);
            return output;
        } catch (Exception e) {
            output = mTupleFactory.newTupleNoCopy(mProtoTuple);
            return output;
        }
    }
}
How should I alter this method to get what I want? Or how should I write another UDF to get there?
Whatever you do, don't use a tuple to store the output. Tuples are intended to store a fixed number of fields, where you know what every field contains. Since you don't know that the keys will be in Name,Age form (or even exist, or that there won't be more) you should use a bag. Bags are unordered sets of tuples. They can have any number of tuples in them as long as the tuples have the same schema. These are all valid bags for the schema B: {T:(key:chararray, value:chararray)}:
{(Name,Foo),(Age,Bar)}
{(Age,25),(Name,Jim)}
{(Name,Bob)}
{(Age,30),(Name,Roger),(Hair Color,Brown)}
{(Hair Color,),(Name,Victor)} -- Note the Null value for Hair Color
However, it sounds like you really want a map:
myudf.py
@outputSchema('M:map[]')
def mapize(the_input):
    out = {}
    for kv in the_input.split(' '):
        k, v = kv.split(':')
        out[k] = v
    return out
myscript.pig
register '../myudf.py' using jython as myudf ;
A = LOAD 'filename.log' AS (total:chararray) ;
B = FOREACH A GENERATE myudf.mapize(total) ;
-- Sample usage, grouping by the name key.
C = GROUP B BY M#'Name' ;
Using the # operator you can pull out all values from the map using the key you give. You can read more about maps here.
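To finish with the count the question was aiming for, a small hedged continuation of myscript.pig (the aliases B and C come from the script above; D is a new name used only here):
-- count the rows for each Name
D = FOREACH C GENERATE group AS name, COUNT(B) AS cnt ;
DUMP D ;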