I need to create an XML file from the results of a SQL query that I run using PowerShell. I already know the schema for the XML I need to create. The query results need to be looped through, and I want to add each data value to a specific XML node as per the schema.
I am able to run the query and get the results I need, but I am having trouble placing the data in the prescribed format.
Here's an example of how I am trying to accomplish this:
# Parse the XML template ($xml is the schema file I have from the client)
$XmlTemplate = [xml](Get-Content $xml)
# Jump to the tags I need to enter data into
$PlanIDXML = $XmlTemplate.NpiLink.PlanProvider.PlanID
$PlannameXML = $XmlTemplate.NpiLink.PlanProvider.PlanName
Sample query:
select PlanID,PlanName from plan
# Assuming I ran my query and saved the results as $qryresults
foreach ($result in $qryresults)
{
    $PlanID = $result.PlanID
    $PlanName = $result.PlanName
    # Make a clone
    $NewPlanIDXML = $PlanIDXML.Clone()
    # Make changes to the data
    $NewPlanIDXML = $PlanID
    # Append
    $PlanIDXML.AppendChild($NewPlanIDXML)
    # Do the same thing for PlanName
    $PlanNameXML = $result.PlanName
}
$XmlTemplate.Save('filepath')
My concern is that I need to do this for each plan or PlanID that I get in my query results: keep generating new <PlanID> and <PlanName> tags, append them to the original nodes, and save the file.
So, if my query results have 10 Plan IDs, it should continue to generate new <PlanID> and <PlanName> tags.
It's not letting me append (System.String cannot be converted to System.Xml.XmlNode). I am really stuck, and if you have a better approach on how to handle this, I am all ears.
Thanks much in advance!!!
You might be overengineering this a bit. If you have a template for the XML node, just treat it as a string, popping your values in at the appropriate place. Generate some array of these nodes as strings, then join them together and save to disk.
Let's say your template looks like this (type in some tokens yourself where generated values should go):
--- Template.xml ---
<Node attr="##VALUE1##">
  <Param>##VALUE2##</Param>
</Node>
And you want to run some query to generate a bunch of these nodes, filling in VALUE1 and VALUE2. Then something like this could work:
$template = (gc .\Template.xml) -join "`r`n"
$val1Token = '##VALUE1##'
$val2Token = '##VALUE2##'
$nodes = foreach( $item in Run-Query)
{
# simple string replace
$result = $template
$result = $result.Replace($val1Token, $item.Value1)
$result = $result.Replace($val2Token, $item.Value2)
$result
}
# you have all nodes in a string array, just join them together along with parent node
$newXml = (@("<Nodes>") + $nodes + "</Nodes>") -join "`r`n"
$newXml | out-file .\Results.xml
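If you'd rather keep the XML DOM approach from the question, the append error happens because AppendChild() expects an XmlNode, while the posted loop overwrites the cloned node with a plain string. A minimal untested sketch (assuming the template has a single PlanProvider node under NpiLink that can serve as a prototype):
# Clone the prototype node once per row, fill in its children, then append
$XmlTemplate = [xml](Get-Content $xml)
$prototype = $XmlTemplate.NpiLink.PlanProvider
foreach ($result in $qryresults)
{
    $newNode = $prototype.Clone()
    $newNode.PlanID = [string]$result.PlanID      # property assignment sets the element text
    $newNode.PlanName = [string]$result.PlanName
    [void]$XmlTemplate.NpiLink.AppendChild($newNode)
}
[void]$XmlTemplate.NpiLink.RemoveChild($prototype)   # drop the empty prototype node
$XmlTemplate.Save('filepath')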
I have a requirement where blob storage has multiple files with the names file_1.csv, file_2.csv, file_3.csv, file_4.csv, file_5.csv, file_6.csv, file_7.csv. From these, I have to read only the files numbered 5 to 7.
How can we achieve this in an ADF/Synapse pipeline?
I have repro'd this in my lab; please see the repro steps below.
ADF:
Using the Get Metadata activity, get a list of all files.
(Parameterize the source file name in the source dataset and pass '*' as the dataset parameter value to get all files.)
Pass the Get Metadata output child items to ForEach activity.
@activity('Get Metadata1').output.childItems
Add an If Condition activity inside the ForEach and add a true-case expression so that only the required files are copied to the sink. With names like file_5.csv, substring(item().name,5,1) picks out the digit after the underscore:
@and(greater(int(substring(item().name,5,1)),4),lessOrEquals(int(substring(item().name,5,1)),7))
When the If Condition is true, add a Copy Data activity to copy the current item (file) to the sink.
I took a slightly different approach, using a Filter activity and the endsWith function:
The filter expression is:
@or(or(endsWith(item().name, '_5.csv'),endsWith(item().name, '_6.csv')),endsWith(item().name, '_7.csv'))
Slightly different approaches with similar results; it depends on what you need.
You can always do what @NiharikaMoola-MT suggested. But since you already know the range of the files (5-7), I suggest:
Declare two parameters as the lower and upper limits of the range.
Create a ForEach loop and pass it a range built from those parameters (see the sketch below).
Create a parameterized dataset for the source.
Use the file number from the ForEach loop to create a dynamic file name expression like
@concat('file_', string(item()), '.csv')
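A sketch of the ForEach items expression, assuming two integer pipeline parameters with the hypothetical names lowerLimit and upperLimit:
@range(pipeline().parameters.lowerLimit, add(sub(pipeline().parameters.upperLimit, pipeline().parameters.lowerLimit), 1))
With lowerLimit = 5 and upperLimit = 7 this yields [5, 6, 7], and each item() then feeds the concat() expression above.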
I have a script that reads a two-column CSV I generate based on the current library of files I am accessing. I chomp the file into an array, then upload the CSV element by element to a SQL table using the code below:
my $e_sth = $trans->prepare("delete from $table where 1 = 1");
$e_sth->execute();
my $u_sql = qq{
insert into $table (KTM, SUBSITE) values
(?, ?)
};
my $u_sth = $trans->prepare($u_sql);
foreach my $rec (@list) {
    my ($klf, $subsite) = split(",", $rec);
    # Upload the row
    $u_sth->execute($klf, $subsite);
}
$trans->commit();
It is always the last line of the file that gets truncated, so I'm sure that will key some of you in on the issue right away. I am lost, though; I assumed the foreach loop would prevent something like this from happening. The number of characters truncated from the last line varies as well. Both columns are set up as CHAR(100), with the KTM column being the primary key.
Any help or guidance is appreciated. Thanks!
EDIT: It looks like I might be reaching some sort of data limit for the table. I'll look into expanding the allocated size of the table.
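One quick way to test that theory is to check value lengths before the insert; a hypothetical snippet to drop into the upload loop:
# Warn if a value would be silently truncated by the CHAR(100) columns
warn "KTM length " . length($klf) . " exceeds 100\n" if length($klf) > 100;
warn "SUBSITE length " . length($subsite) . " exceeds 100\n" if length($subsite) > 100;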
I'm working on a personal project and am very new (learning as I go) to JSON, NiFi, SQL, etc., so forgive any confusing language used here or a potentially really obvious solution. I can clarify as needed.
I need to take the JSON output from a website's API call and insert it into a table in the MariaDB local server that I've set up. The issue is that the JSON data is nested, and two of the key pieces of data that I need to insert are used as variable keys rather than values, so I don't know how to extract them and put them in the database table. Essentially, I think I need to identify different pieces of the JSON expression and insert them as values, but I'm clueless how to do so.
I've played around with the EvaluateJsonPath, SplitJson, and FlattenJson processors in particular, but I can't make it work. All I can ever do is get the result of the whole expression, rather than each piece of it.
{"5381":{"wind_speed":4.0,"tm_st_snp":26.0,"tm_off_snp":74.0,"tm_def_snp":63.0,"temperature":58.0,"st_snp":8.0,"punts":4.0,"punt_yds":178.0,"punt_lng":55.0,"punt_in_20":1.0,"punt_avg":44.5,"humidity":47.0,"gp":1.0,"gms_active":1.0},
"1023":{"wind_speed":4.0,"tm_st_snp":26.0,"tm_off_snp":82.0,"tm_def_snp":56.0,"temperature":74.0,"off_snp":82.0,"humidity":66.0,"gs":1.0,"gp":1.0,"gms_active":1.0},
"5300":{"wind_speed":17.0,"tm_st_snp":27.0,"tm_off_snp":80.0,"tm_def_snp":64.0,"temperature":64.0,"st_snp":21.0,"pts_std":9.0,"pts_ppr":9.0,"pts_half_ppr":9.0,"idp_tkl_solo":4.0,"idp_tkl_loss":1.0,"idp_tkl":4.0,"idp_sack":1.0,"idp_qb_hit":2.0,"humidity":100.0,"gp":1.0,"gms_active":1.0,"def_snp":23.0},
"608":{"wind_speed":6.0,"tm_st_snp":20.0,"tm_off_snp":53.0,"tm_def_snp":79.0,"temperature":88.0,"st_snp":4.0,"pts_std":5.5,"pts_ppr":5.5,"pts_half_ppr":5.5,"idp_tkl_solo":4.0,"idp_tkl_loss":1.0,"idp_tkl_ast":1.0,"idp_tkl":5.0,"humidity":78.0,"gs":1.0,"gp":1.0,"gms_active":1.0,"def_snp":56.0},
"3396":{"wind_speed":6.0,"tm_st_snp":20.0,"tm_off_snp":60.0,"tm_def_snp":70.0,"temperature":63.0,"st_snp":19.0,"off_snp":13.0,"humidity":100.0,"gp":1.0,"gms_active":1.0}}
This is a snapshot of an output with a couple thousand lines. Each of the numeric keys that you see above (5381, 1023, 5300, etc.) is a player ID for the stats that follow. I have a table set up with three columns: Player ID, Stat ID, and Stat Value. For example, I need that first snippet to be inserted into my table as such:
Player ID   Stat ID      Stat Value
5381        wind_speed   4.0
5381        tm_st_snp    26.0
5381        tm_off_snp   74.0
And so on, for each piece of data. But I don't know how to have NiFi select the right pieces of data to insert in the right columns.
I believe it's possible to use a Jolt transform (the JoltTransformJSON processor) to reshape your JSON into this format:
[
{"playerId":"5381", "statId":"wind_speed", "statValue": 0.123},
{"playerId":"5381", "statId":"tm_st_snp", "statValue": 0.456},
...
]
then use PutDatabaseRecord with a JSON record reader.
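Something like this two-pass shift spec might work (an untested sketch; verify it in the Jolt demo, and adjust the playerId/statId/statValue names to your columns):
[
  {
    "operation": "shift",
    "spec": {
      "*": {
        "*": {
          "$1": "&2.&1.playerId",
          "$": "&2.&1.statId",
          "@": "&2.&1.statValue"
        }
      }
    }
  },
  {
    "operation": "shift",
    "spec": {
      "*": {
        "*": "[]"
      }
    }
  }
]
The first pass turns each playerId/statId pair into a small three-field object still keyed by player and stat; the second pass collects all of those objects into a single top-level array.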
Another approach is to use the ExecuteGroovyScript processor.
Add a new property to it with the name SQL.mydb and link it to your DBCP controller service.
Then use the following script as the Script Body parameter:
import groovy.json.JsonSlurper
import groovy.json.JsonBuilder
def ff=session.get()
if(!ff)return
//read flow file content and parse it
def body = ff.read().withReader("UTF-8"){reader->
new JsonSlurper().parse(reader)
}
def results = []
//use defined sql connection to create a batch
SQL.mydb.withTransaction{
def cmd = 'insert into mytable(playerId, statId, statValue) values(?,?,?)'
results = SQL.mydb.withBatch(100, cmd){statement->
//run through all keys/subkeys in flow file body
body.each{pid,keys->
keys.each{k,v->
statement.addBatch(pid,k,v)
}
}
}
}
//write results as a new flow file content
ff.write("UTF-8"){writer->
new JsonBuilder(results).writeTo(writer)
}
//transfer to success
REL_SUCCESS << ff
('aaa', 'bbb', 'ccc')
....
Let's suppose the above tuple gets loaded with the following schema:
as (firstField:chararray, secondField:chararray, thirdField:chararray)
I want to store the tuple in HDFS with the path based on the 2nd field (which is 'bbb' in the above example). So the above tuple would get stored in the path
/SomeBaseDir/bbb/testoutput.txt
Any help would be appreciated.
To load the file, use the command below. Remember, the input data should be tab-separated. If you are using any other separator, like a comma, change the parameter passed to the PigStorage function; it should be PigStorage(',').
A = load '/home/abhishek/Work/pigInput/data' using PigStorage('\t') as (firstField:chararray, secondField:chararray, thirdField:chararray);
Now, to get the second field, simply use:
result = foreach A generate secondField;
Result:
dump result
('bbb')
You can store it using the command below:
store result into 'provide the path';
I think you want to use MultiStorage (https://pig.apache.org/docs/r0.8.1/api/org/apache/pig/piggybank/storage/MultiStorage.html). This should do pretty much what you want: you specify the base path, then the field on which the subdirectories should be based.
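A sketch of the usage (untested; the piggybank jar and input paths are placeholders, and '1' is the zero-based index of secondField):
REGISTER /path/to/piggybank.jar;
A = load '/SomeBaseDir/input' using PigStorage('\t') as (firstField:chararray, secondField:chararray, thirdField:chararray);
store A into '/SomeBaseDir' using org.apache.pig.piggybank.storage.MultiStorage('/SomeBaseDir', '1');
Records with secondField == 'bbb' then land under /SomeBaseDir/bbb/.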
The SPLIT operator in Pig Latin would do the job here.
Input data (in) which has been loaded can be split into different output variables as follows:
loadedData = load ' ' using ... as (.., somefield, ..);
SPLIT loadedData INTO
segmentA IF (somefield=='A'),
segmentB IF (somefield=='B'),
OtherSources OTHERWISE;
store segmentA into 'hdfs://<path for data segmentA >' using ....;
store segmentB into 'hdfs://<path for data segmentB >' using ....;
I am trying to retrieve data from a simple MySQL table tbl_u_type, which has just two columns, 'tid' and 'type'.
I want to use a direct SQL query instead of the Model logic. I used:
$command = Yii::app()->db->createCommand();
$userArray = $command->select('type')->from('tbl_u_type')->queryAll();
return $userArray;
But in the dropdown list it automatically shows an index number along with the required entry. Is there any way I can avoid the index number?
To make an array of data usable in a dropdown, use the CHtml::listData() method. The index numbers show up because queryAll() returns rows as a numerically indexed array; listData() converts them into a value => label map. If I understand the question right, this should get you going. Something like this:
$command = Yii::app()->db->createCommand();
$userArray = $command->select('tid, type')->from('tbl_u_type')->queryAll();
echo CHtml::dropDownList('my_dropdown','',CHtml::listData($userArray,'tid','type'));
You can also do this with the Model if you have one set up for the tbl_u_type table:
$users = UType::model()->findAll();
echo CHtml::dropDownList('my_dropdown','',CHtml::listData($users,'tid','type'));
I hope that gets you on the right track. I didn't test my code here, as usual, so watch out for that. ;) Good luck!