Possible to use angular-datatables with server-side array-sourced data instead of object-sourced data?

I'm trying to use angular-datatables with server-side processing. However, it seems that angular-datatables expects the data from the server to be in object format (object vs. array data described), with a column name keyed to each table datapoint. I'd like to configure angular-datatables to accept array-based data, since I can't modify my server-side output, which only emits data in array format.
I'm configuring DataTables in my JavaScript like so:
var vm = this;
vm.dtOptions = DTOptionsBuilder.newOptions()
    .withOption('ajax', {
        url: 'output/ss_results/' + $routeParams.uuid,
        type: 'GET'
    })
    .withDataProp('data')
    .withOption('processing', true)
    .withOption('serverSide', true);
My data from the server looks like this in array format:
var data = [
    [
        "Tiger Nixon",
        "System Architect",
        "$3,120"
    ],
    [
        "Garrett Winters",
        "Director",
        "$5,300"
    ]
];
But as far as I can tell, angular-datatables is expecting the data in object format like so:
[
    {
        "name": "Tiger Nixon",
        "position": "System Architect",
        "extn": "5421"
    },
    {
        "name": "Garrett Winters",
        "position": "Director",
        "extn": "8422"
    }
]
I tried not defining dtColumns and also setting it to an empty array (vm.dtColumns = [];), but I get an error message either way. When I configure dtColumns with a promise to load the column data via AJAX, I get DataTables error #4 because it can't find a column name keyed to each table datapoint in the data retrieved from the server.
Is it possible to configure angular-datatables to accept array-based data? I can't find anything on the angular-datatables website that indicates it can be configured this way.
Edit: I removed the .withDataProp('data'), which I think was causing the problem. The table works a little better now, but it's still broken. After it loads, I get the message "No matching records found", even though right below that it says "Showing 1 to 10 of 60,349 entries" and renders the pager ("Previous 1 … 4 5 6 … 6035 Next"). Does anyone know why this might be?

If you want to use an array of arrays instead of an array of objects, simply refer to the array indexes instead of the object names:
$scope.dtColumns = [
    DTColumnBuilder.newColumn(0).withTitle('Name'),
    DTColumnBuilder.newColumn(1).withTitle('Position'),
    DTColumnBuilder.newColumn(2).withTitle('Office'),
    DTColumnBuilder.newColumn(3).withTitle('Start date'),
    DTColumnBuilder.newColumn(4).withTitle('Salary')
];
demo using the famous "Tiger Nixon" array loaded via AJAX -> http://plnkr.co/edit/16UoRqF5hvg2YpvAP8J3?p=preview
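For the asker's three-column arrays and the controller-as style from the question, the two pieces can be combined roughly as follows (a sketch, not a definitive setup: the column titles are assumptions, and whether .withDataProp('data') is kept depends on whether the server wraps the rows in a "data" property, as discussed in the question's edit):

var vm = this;

vm.dtOptions = DTOptionsBuilder.newOptions()
    .withOption('ajax', {
        url: 'output/ss_results/' + $routeParams.uuid,
        type: 'GET'
    })
    .withDataProp('data')            // keep or drop depending on the server's response envelope
    .withOption('processing', true)
    .withOption('serverSide', true);

// columns are addressed by array index rather than by object property name
vm.dtColumns = [
    DTColumnBuilder.newColumn(0).withTitle('Name'),
    DTColumnBuilder.newColumn(1).withTitle('Position'),
    DTColumnBuilder.newColumn(2).withTitle('Salary')
];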

Related

Proper way to convert Data type of a field in MongoDB

Possible duplicate of How to change the type of a field?
I am new to MongoDB and I am facing a problem converting the data type of a field value to another data type.
Below is an example of my documents:
[
    {
        "Name of Restaurant": "Briyani Center",
        "Address": " 336 & 338, Main Road",
        "Location": "XYZQWE",
        "PriceFor2": "500.0",
        "Dining Rating": "4.3",
        "Dining Rating Count": "1500"
    },
    {
        "Name of Restaurant": "Veggie Conner",
        "Address": " New 14, Old 11/3Q, Railway Station Road",
        "Location": "ABCDEF",
        "PriceFor2": "1000.0",
        "Dining Rating": "4.4"
    }
]
I have 12k documents like the above. Notice that the data type of PriceFor2 is a string; I would like to convert it to an integer type.
I have referred to many of the excellent answers given in the linked question, but when I try to run the query, I get a .save() is not a function error. Please advise what the problem is.
Below is the code I used:
db.chennaiData.find().forEach( function(x){
    x.priceFor2 = new NumberInt(x.priceFor2);
    db.chennaiData.save(x);
});
This is the error I am getting:
TypeError: db.chennaiData.save is not a function
From MongoDB's save documentation:
Starting in MongoDB 4.2, the db.collection.save() method is deprecated. Use db.collection.insertOne() or db.collection.replaceOne() instead.
You are likely running MongoDB 4.2+, where the save function is no longer available. Consider migrating to insertOne and replaceOne as suggested.
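For example, a minimal sketch of the same loop with replaceOne standing in for the removed save (this assumes the chennaiData collection from the question; note that the field in the documents is PriceFor2, not priceFor2):

// sketch: replaceOne replaces the whole document matched by _id, much like save() did
db.chennaiData.find().forEach(function (x) {
    x.PriceFor2 = NumberInt(parseInt(x.PriceFor2, 10)); // "500.0" -> 500 as a 32-bit integer
    db.chennaiData.replaceOne({ _id: x._id }, x);
});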
For your specific scenario, it is actually preferable to do this with a single update, as mentioned in another SO answer. It makes only one db call, whereas your approach fetches every document in the collection to the application level and then performs n db calls to save them back.
db.collection.update({},
    [
        {
            $set: {
                PriceFor2: {
                    $toDouble: "$PriceFor2"
                }
            }
        }
    ],
    { multi: true })
Mongo Playground
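If you prefer the non-deprecated helper, the same aggregation-pipeline update can be issued with updateMany; a sketch using the asker's collection and field names:

// updateMany with a pipeline update (MongoDB 4.2+); the empty filter {} matches every document
db.chennaiData.updateMany(
    {},
    [ { $set: { PriceFor2: { $toDouble: "$PriceFor2" } } } ]
);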

Google Docs API for creating invoice containing table of variable number of rows

I have a template file for my invoice with a table containing a sample row, but I want to add more rows dynamically based on a given array's size, and write the cell values from the array...
Template's photo
I've been struggling for almost 3 days now.
Is there any easy way to accomplish that?
Here's the template file: Link to the Docs file(template)
And here's a few sample arrays of input data to be replaced in the Template file:
[
    [
        "Sample item 1s",
        "Sample Quantity 1",
        "Sample price 1",
        "Sample total 1"
    ],
    [
        "Sample item 2",
        "Sample Quantity 2",
        "Sample price 2",
        "Sample total 2"
    ],
    [
        "Sample item 3",
        "Sample Quantity 3",
        "Sample price 3",
        "Sample total 3"
    ]
]
Now, the length of the parent array can vary depending on the number of items in the invoice, and that's the only problem that I'm struggling with.
And... yeah, this is a duplicate question; I've found another question on the same topic, but looking at the answers and comments, everyone says they don't understand the question, whereas it looks perfectly clear to me:
Google Docs Invoice template with dynamically items row from Google Sheets
I think the person who asked that question has already given up on it. :(
By the way, I am using the API for PHP (Google API Client Library for PHP), and the code for replacing dummy text in a Google Docs document with the actual data is given below:
public function replaceTexts(array $replacements, string $document_id) {
    # code...
    $req = new Docs\BatchUpdateDocumentRequest();
    // var_dump($replacements);
    // die();
    foreach ($replacements as $replacement) {
        $target = new Docs\SubstringMatchCriteria();
        $target->text = "{{" . $replacement["targetText"] . "}}";
        $target->setMatchCase(false);
        $req->setRequests([
            ...$req->getRequests(),
            new Docs\Request([
                "replaceAllText" => [
                    "replaceText" => $replacement["newText"],
                    "containsText" => $target
                ]
            ]),
        ]);
    }
    return $this->docs_service->documents->batchUpdate(
        $document_id,
        $req
    );
}
A possible solution would be the following:
1. First prep the document by removing every row from the table apart from the title.
2. Get the full document tree from the Google Docs API. This would be a simple call with the document id:
$doc = $service->documents->get($documentId);
3. Traverse the document object returned to get to the table and then find the location of the right cell. This could be done by looping through the elements in the body object until one with the right table field is found. Note that this may not necessarily be the first one, since in your template the section with the {{CustomerName}} placeholder is also a table, so you may have to find a table whose first cell has the text value "Item".
4. Add a new row to the table. This is done by creating a request with the shape:
[
    'insertTableRow' => [
        'tableCellLocation' => [
            'rowIndex' => 1,
            'columnIndex' => 1,
            'tableStartLocation' => [
                'index' => 177
            ]
        ]
    ]
]
The tableStartLocation->index element is the start index of the table the row is being added to, i.e. body->content[i]->table->startIndex. Send the request.
5. Repeat steps 2 and 3 to get the updated $doc object, and then access the newly created cell, i.e. body->content[i]->table->tableRows[j]->tableCells[k]->content->paragraph->elements[l]->startIndex.
6. Send a request to update the text content of the cell at the location of the startIndex from step 5 above, i.e.
[
    'insertText' => [
        'location' => [
            'index' => 206,
        ],
        'text' => 'item_1'
    ]
]
7. Repeat step 5, but access the next cell. Note that after each update you need to fetch an updated version of the document object, because the indexes change after inserts.
To be honest, this approach is pretty cumbersome, and it's probably more efficient to insert all the data into a spreadsheet and then embed the spreadsheet into your Docs document. Information on that can be found here: How to insert an embedded sheet via Google Docs API?.
As a final note, I created a copy of your template and used the "Try this method" feature in the API documentation to validate my approach, so some of the PHP syntax may be a bit off, but I hope you get the general idea.

How to convert JSON to CSV and send it to BigQuery or a Google Cloud bucket

I'm new to NiFi and I want to convert a large amount of JSON data to CSV format.
This is what I am doing at the moment, but it is not giving the expected result.
These are the steps:
Processors to create the access_token and send the request body using InvokeHTTP (this part works fine and gives the expected result, so I won't name the processors), getting the response body in JSON.
Example of the JSON response:
[
    {
        "results": [
            {
                "customer": {
                    "resourceName": "customers/123456789",
                    "id": "11111111"
                },
                "campaign": {
                    "resourceName": "customers/123456789/campaigns/222456422222",
                    "name": "asdaasdasdad",
                    "id": "456456546546"
                },
                "adGroup": {
                    "resourceName": "customers/456456456456/adGroups/456456456456",
                    "id": "456456456546",
                    "name": "asdasdasdasda"
                },
                "metrics": {
                    "clicks": "11",
                    "costMicros": "43068982",
                    "impressions": "2079"
                },
                "segments": {
                    "device": "DESKTOP",
                    "date": "2021-11-22"
                },
                "incomeRangeView": {
                    "resourceName": "customers/456456456/incomeRangeViews/456456546~456456456"
                }
            },
            {
                "customer": {
                    "resourceName": "customers/123456789",
                    "id": "11111111"
                },
                "campaign": {
                    "resourceName": "customers/123456789/campaigns/222456422222",
                    "name": "asdasdasdasd",
                    "id": "456456546546"
                },
                "adGroup": {
                    "resourceName": "customers/456456456456/adGroups/456456456456",
                    "id": "456456456546",
                    "name": "asdasdasdas"
                },
                "metrics": {
                    "clicks": "11",
                    "costMicros": "43068982",
                    "impressions": "2079"
                },
                "segments": {
                    "device": "DESKTOP",
                    "date": "2021-11-22"
                },
                "incomeRangeView": {
                    "resourceName": "customers/456456456/incomeRangeViews/456456546~456456456"
                }
            },
            ....etc....
        ]
    }
]
Now I am using:
===>SplitJson ($[].results[])==>JoltTransformJSON with this spec:
[{
    "operation": "shift",
    "spec": {
        "customer": {
            "id": "customer_id"
        },
        "campaign": {
            "id": "campaign_id",
            "name": "campaign_name"
        },
        "adGroup": {
            "id": "ad_group_id",
            "name": "ad_group_name"
        },
        "metrics": {
            "clicks": "clicks",
            "costMicros": "cost",
            "impressions": "impressions"
        },
        "segments": {
            "device": "device",
            "date": "date"
        },
        "incomeRangeView": {
            "resourceName": "keywords_id"
        }
    }
}]
==>> MergeContent (here is the problem which I don't know how to fix)
Merge Strategy: Defragment
Merge Format: Binary Concatenation
Attribute Strategy: Keep Only Common Attributes
Maximum number of Bins: 5 (I tried 10, same result)
Delimiter Strategy: Text
Header: [
Footer: ]
Demarcator: ,
What result do I get?
I get a JSON file that contains only part of the JSON data.
Example: I have 50k customer_ids in one JSON file, and I would like to send this data into a BigQuery table with all the ids under the same field, "customer_id".
MergeContent takes the split JSON files and combines them, but I still get 10k customer_ids per file, i.e. I end up with 5 files and not 1 file with 50k customer_ids.
After the MergeContent I use ==>> ConvertRecord with these settings:
Record Reader: JsonTreeReader (Schema Access Strategy: Infer Schema)
Record Writer: CsvRecordWriter (
Schema Write Strategy: Do Not Write Schema
Schema Access Strategy: Inherit Record Schema
CSV Format: Microsoft Excel
Include Header Line: true
Character Set: UTF-8
)
==>> UpdateAttribute (custom prop: filename: ${filename}.csv) ==>> PutGCSObject (this puts the data into the Google Cloud bucket; this step works fine, I am able to put files there).
With this approach I am UNABLE to send the data to BigQuery. After MergeContent I tried using PutBigQueryBatch, and used this command in the bq shell to get the schema I need:
bq show --format=prettyjson some_data_set.some_table_in_that_data_set | jq '.schema.fields'
I filled in all the fields as needed. For Load file type I tried NEWLINE_DELIMITED_JSON, or CSV when I converted the data to CSV; I am not getting errors, but no data is uploaded into the table.
What am I doing wrong? I basically want to map the data in such a way that each field's data ends up under the same field name.
The trick you are missing is using Records.
Instead of using X>SplitJson>JoltTransformJson>Merge>Convert>X, try just X>JoltTransformRecord>X with a JSON Reader and a CSV Writer. This skips a lot of inefficiency.
If you really need to split (and you should avoid splitting and merging unless totally necessary), you can use MergeRecord instead - again with a JSON Reader and CSV Writer. This would make your flow X>Split>Jolt>MergeRecord>X.

GCP Dataflow JOB REST response add displayData object with { "key":"datasetName", ...}

Why doesn't this line of code generate a displayData object with { "key": "datasetName", ... }, and how can I generate it if it doesn't come by default when using the BigQuery source from Apache Beam?
bigqcollection = p | 'ReadFromBQ' >> beam.io.Read(beam.io.BigQuerySource(project=project,query=get_java_query))
[UPDATE] Adding the result that I am trying to produce:
"displayData": [
{
"key": "table",
"namespace": "....",
"strValue": "..."
},
{
"key": "datasetName",
"strValue": "..."
}
]
From reading the implementation of display_data() for a BigQuerySource in the most recent version of Beam, it does not extract the table and dataset from a query, which is what your example uses. More significantly, it does not create any field specifically named datasetName.
I would recommend writing a subclass of _BigQuerySource which adds the fields you need to the display data, while preserving all the other behavior.

Collapsing a group using Google Sheets API

So, as a workaround to difficulties creating a new sheet with groups, I am trying to create and collapse these groups in a separate call to batchUpdate. I can request an addDimensionGroup successfully, but when I request an updateDimensionGroup to collapse the group I just created, either in the same API call or in a separate one, I get this error:
{
    "error": {
        "code": 400,
        "message": "Invalid requests[1].updateDimensionGroup: dimensionGroup.depth must be \u003e 0",
        "status": "INVALID_ARGUMENT"
    }
}
But I'm passing depth as 0, as seen in the following JSON which I send in my request:
{
    "requests": [
        {
            "addDimensionGroup": {
                "range": {
                    "dimension": "ROWS",
                    "sheetId": 0,
                    "startIndex": 2,
                    "endIndex": 5
                }
            }
        },
        {
            "updateDimensionGroup": {
                "dimensionGroup": {
                    "range": {
                        "dimension": "ROWS",
                        "sheetId": 0,
                        "startIndex": 2,
                        "endIndex": 5
                    },
                    "depth": 0,
                    "collapsed": true
                },
                "fields": "*"
            }
        }
    ],
    "includeSpreadsheetInResponse": true
}
...
I'm not entirely sure what I am supposed to provide for "fields"; the documentation for UpdateDimensionGroupRequest says it is supposed to be a string ("string (FieldMask format)"), but the FieldMask definition itself shows the possibility of multiple paths and doesn't tell me how they are supposed to be separated in a single string.
What am I doing wrong here?
The error message is actually instructing you that the dimensionGroup.depth value must be > 0:
If you call spreadsheets.get() on your sheet, and request only the DimensionGroup data, you'll note that your created group is actually at depth 1:
GET https://sheets.googleapis.com/v4/spreadsheets/{SSID}?fields=sheets(rowGroups)&key={API_KEY}
This makes sense, since the depth is (per the API spec):
depth (number): The depth of the group, representing how many groups have a range that wholly contains the range of this group.
Note that any given DimensionGroup "wholly contains its own range" by definition.
If your goal is to change the status of the DimensionGroup, then you need to set its collapsed property:
{
    "requests": [
        {
            "updateDimensionGroup": {
                "dimensionGroup": {
                    "range": {
                        "sheetId": <your sheet id>,
                        "dimension": "ROWS",
                        "startIndex": 2,
                        "endIndex": 5
                    },
                    "collapsed": true,
                    "depth": 1
                },
                "fields": "collapsed"
            }
        }
    ]
}
For this particular Request, the only attribute you can set is collapsed - the other properties are used to identify the desired DimensionGroup to manipulate. Thus, specifying fields: "*" is equivalent to fields: "collapsed". This is not true for the majority of requests, so specifying fields: "*" and then omitting a non-required request parameter is interpreted as "Delete that missing parameter from the server's representation".
To change a DimensionGroup's depth, you must add or remove other DimensionGroups that encompass it.
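If you happen to be calling the API from Node.js (an assumption; the question only shows raw JSON), a sketch of sending the corrected request with the googleapis client could look like this:

// sketch: collapse an existing row group via spreadsheets.batchUpdate
const { google } = require('googleapis');

async function collapseGroup(auth, spreadsheetId) {
    const sheets = google.sheets({ version: 'v4', auth });
    await sheets.spreadsheets.batchUpdate({
        spreadsheetId: spreadsheetId,
        requestBody: {
            requests: [{
                updateDimensionGroup: {
                    dimensionGroup: {
                        range: { sheetId: 0, dimension: 'ROWS', startIndex: 2, endIndex: 5 },
                        depth: 1,        // must match the depth the API reports for the group
                        collapsed: true  // the only property actually being changed
                    },
                    fields: 'collapsed'
                }
            }]
        }
    });
}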