Good morning. We are trying to find a way to do a bulk update of a couple thousand documents in RavenDB. This is the first time we have done this; the DB is provided by a third party, and they are not sure how to do it either.
Essentially, we have thousands of records that look like the one below:
{
"VPOId": 8,
"Description": "VPO 8",
"AreaId": "93",
"AreaDisplay": "Area",
"Address": "Address",
"JobId": "109201005111",
"JobDisplay": "Address",
"TradeId": "19",
"TradeDisplay": "Finishing",
"VarianceId": "V70",
"VarianceDisplay": "V70 - Trade Change",
"SupplierId": "104095",
"SupplierDisplay": "Vendor Name",
"SupplierBackChargeId": null,
"SupplierBackChargeDisplay": null,
"IssuedDate": "2017-08-14T00:00:00.0000000",
"SearchTerms": " 109201005111 ",
"AccountId": "d740eb47-137d-e711-80d4-00505681128f",
"Active": true,
"DivisionDisplay": "010"
We need to mass update the SearchTerms and the JobId, preferably from our Excel spreadsheet mapping document. I have tried the CSV export and import, but that just seems to create new records instead of updating the old ones. Is there a way we can do that?
......
If your input is CSV, you'll need to write a script that does this.
You can do that in Python or C#.
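As a rough Python sketch of that kind of script, assuming the pyravendb client and a mapping file exported from the Excel sheet as "mapping.csv" with columns DocumentId, NewJobId and NewSearchTerms (the file name, column names, server URL and database name are all placeholders, and the exact import path and session API depend on the client version you install):

import csv
from pyravendb.store.document_store import DocumentStore

# Placeholder connection details -- point these at your RavenDB server and database.
store = DocumentStore(urls=["http://your-ravendb-server:8080"], database="YourDatabase")
store.initialize()

with open("mapping.csv", newline="") as f:
    for row in csv.DictReader(f):
        # One short session per document keeps each update small and simple;
        # batching or the patch API can be layered on later if this is too slow.
        with store.open_session() as session:
            doc = session.load(row["DocumentId"])
            if doc is None:
                continue  # unknown id -- skip instead of creating a new document
            doc.JobId = row["NewJobId"]
            doc.SearchTerms = row["NewSearchTerms"]
            session.save_changes()

Loading and saving each existing document by id is what avoids the duplicate records the CSV import was creating.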
I have a problem with Oracle ORDS escaping my GeoJSON with \" characters:
{
"id": 1,
"city": "New York",
"state_abrv": "NY",
"location": "{\"type\":\"Point\",\"coordinates\":[-73.943849, 40.6698]}"
}
In the Oracle DB it is stored correctly:
{"type":"Point","coordinates":[-73.943849, 40.6698]}
I need help figuring out why the \" escapes are added and how to prevent this from happening.
Add this column alias to your RESTful service handler query for the JSON column:
SELECT id,
       jsons "{}jsons"  -- this one
  FROM table_with_json
Then, when ORDS sees the data for that column, it won't re-format it as JSON, because it already IS JSON.
You can use whatever alias you want; in your case it should probably be
"{}location"
I am pretty new to PostgreSQL and not too familiar with SQL yet, but I'm trying to learn.
In my database I want to store huge JSON files (~2 million lines, 40 MB) and later query them as fast as possible. Right now it is too slow, so I figured indexing should do the trick.
The problem is I do not know how to index the file, since it is a bit tricky. I have been working on it the whole day now and am starting to get desperate.
My DB is called "replays", the JSON column "replay_files".
So my files look like this:
"replay": [
{
"data": {
"posX": 182,
"posY": 176,
"hero_name": "CDOTA_Unit_Hero_EarthSpirit"
},
"tick": 2252,
"type": "entity"
},
{
"data": {
"posX": 123,
"posY": 186,
"hero_name": "CDOTA_Unit_Hero_Puck"
},
"tick": 2252,
"type": "entity"
}, ...a lot more lines... ]}
I am trying to get all the entries with, say, hero_name: Puck.
So I tried this:
SELECT * FROM replays r, json_array_elements(r.replay_file#>'{replay}') obj WHERE obj->'data'->>'hero_name' = 'CDOTA_Unit_Hero_Puck';
This works, but only for smaller files.
So I want to create an index like this:
CREATE INDEX hero_name_index ON
replays ((json_array_elements(r.replay_file#>'{replay}')->'data'->'hero_name);
But it doesn't work. I have no idea how to reach that deep into the file and get this stuff indexed.
I hope you understand my problem, since my English isn't the best, and can help me out here. I just don't know what else to try.
Kind regards and thanks a lot in advance,
Peter
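One thing worth knowing here: PostgreSQL does not allow set-returning functions such as json_array_elements inside index expressions, which is why that CREATE INDEX fails. A common workaround is to cast the column to jsonb, put a GIN index on it, and filter rows with the @> containment operator before expanding the array. A rough sketch via psycopg2 (connection string and index name are placeholders; this assumes PostgreSQL 9.4+ and the replay_file column from the query above):

import json
import psycopg2

# Placeholder connection string -- adjust to your database.
conn = psycopg2.connect("dbname=mydb user=postgres")
cur = conn.cursor()

# GIN index on the jsonb cast of the column; jsonb_path_ops indexes support the @> operator.
cur.execute("""
    CREATE INDEX replay_file_gin_idx
        ON replays USING GIN ((replay_file::jsonb) jsonb_path_ops)
""")
conn.commit()

# Filter whole rows with the indexed containment test first, then expand only the
# matching rows to pull out the individual entries, as in the original query.
hero = "CDOTA_Unit_Hero_Puck"
pattern = json.dumps({"replay": [{"data": {"hero_name": hero}}]})
cur.execute("""
    SELECT obj
      FROM replays r,
           jsonb_array_elements(r.replay_file::jsonb #> '{replay}') obj
     WHERE r.replay_file::jsonb @> %s::jsonb
       AND obj -> 'data' ->> 'hero_name' = %s
""", (pattern, hero))
print(cur.fetchall())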
I need to add descriptions to each column of a BigQuery table. It seems I can do it manually; how do I do it programmatically?
BigQuery now supports the ALTER COLUMN SET OPTIONS statement, which can be used to update the description of a column.
Example:
ALTER TABLE mydataset.mytable
ALTER COLUMN price
SET OPTIONS (
  description = "Price per unit"
);
Documentation:
https://cloud.google.com/bigquery/docs/reference/standard-sql/data-definition-language#alter_column_set_options_statement
As Adam mentioned, you can use the tables PATCH method on the API to update the schema columns. The other method is to use the bq command-line tool.
You can first get the schema by doing the following:
1: Get the JSON schema:
TABLE=publicdata:samples.shakespeare
bq show --format=prettyjson ${TABLE} > table.txt
Then copy the schema from table.txt to schema.txt ... it will look something like:
[
{
"description": "A single unique word (where whitespace is the delimiter) extracted from a corpus.",
"mode": "REQUIRED",
"name": "word",
"type": "STRING"
},
{
"description": "The number of times this word appears in this corpus.",
"mode": "REQUIRED",
"name": "word_count",
"type": "INTEGER"
},
....
]
2: Set the description field to whatever you want (if it is not there, add it).
3: Tell BigQuery to update the schema with the added descriptions. Note that schema.txt must contain the complete schema.
bq update --schema schema.txt -t ${TABLE}
You can use the REST API to create or update a table, and specify a field description (schema.fields[].description) in your schema.
https://cloud.google.com/bigquery/docs/reference/v2/tables#methods
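If you'd rather stay in code than call the REST API directly, the google-cloud-bigquery Python client wraps the same tables.patch call; a minimal sketch, reusing the mydataset.mytable / price / "Price per unit" example from above:

from google.cloud import bigquery

client = bigquery.Client()  # project and credentials come from the environment
table = client.get_table("mydataset.mytable")

# Rebuild the schema, swapping in a new description for the column we care about.
new_schema = []
for field in table.schema:
    if field.name == "price":
        field = bigquery.SchemaField(
            field.name,
            field.field_type,
            mode=field.mode,
            description="Price per unit",
        )
    new_schema.append(field)

table.schema = new_schema
client.update_table(table, ["schema"])  # only the schema is patched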
I'm trying to insert multiple documents using MongoVUE by passing an array of documents in the Insert Document window. For example:
[ {"name": "Kiran", age: 20}, {"name": "John", "age": 31} ]
However, I kept getting the following error:
ReadStartDocument can only be called when CurrentBsonType is Document, not when CurrentBsonType is Array
Does anyone know how to do bulk insert in MongoVUE?
Thanks!
In case anyone else stumbles on this question: the answer is that the "Import Multiple Documents" functionality in MongoVUE doesn't accept an array of objects like you would expect it to. Instead, it expects the file to be formatted as a simple series of documents.
For the above example, you could create a simple file called "import.json", format the data like this, and it will import fine:
{"name": "Kiran", age: 20}
{"name": "John", "age": 31}
I have a file.csv file with over 180,000 lines in it. I need to pick out only about 8 lines from it. Each of these lines has the same id, so this is what the file looks like:
"id", "name", "subid"
"1", "Entry no 1", "4234"
"1", "Entry no 2", "5233"
"1", "Entry no 3", "2523"
. . .
"1", "Entry no 8", "2322"
"2", "Entry no 1", "2344"
Is there a way for me to pick out just the data with id 1 (or another number) without indexing the whole file into a database (either SQLite or Core Data)? Indexing 180,000 records would cause major performance issues for the app. This is all for the iPhone, on iOS 5.
Thanks for the help.
Just parse the CSV and store the values in a local variable. For parsing CSV in Objective-C, check out the following tutorials:
http://www.macresearch.org/cocoa-scientists-part-xxvi-parsing-csv-data
http://cocoawithlove.com/2009/11/writing-parser-using-nsscanner-csv.html
Kind regards,
Bo
I would strongly recommend putting that in Core Data. Sure, it will be indexed, but that is actually a good thing, since your lookups will be way faster. Parsing that document every time is going to be far more demanding than looking it up in Core Data; the overhead is a small price to pay.
Sounds like a good job for Dave DeLong's CHCSVParser.
It works a bit like NSXMLParser, so you can just skip all the lines you don't want, and keep the 8 lines you do want.
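Language aside, the approach is just a streaming filter: read the file row by row, keep the rows whose first column matches the id, and throw everything else away. A small Python sketch of the same idea (on iOS the identical logic applies with CHCSVParser or NSScanner):

import csv

target_id = "1"  # the id whose ~8 rows we want

matches = []
with open("file.csv", newline="") as f:
    reader = csv.reader(f, skipinitialspace=True)
    next(reader)                      # skip the "id", "name", "subid" header row
    for row in reader:
        if row and row[0] == target_id:
            matches.append(row)       # keep only the rows with the wanted id

print(matches)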