Using Apache Velocity for templated SQL

I want to use the Apache Velocity Template Engine to generate a SQL query based on the input.
Any sample snippet to get started would be helpful.
JSONObject keysObject = new JSONObject();
keysObject.put("HistoryId", "1");
keysObject.put("TenantName", "Tesla");

Iterator<?> keys = keysObject.keys();
Map<String, Object> map = new HashMap<>();
while (keys.hasNext()) {
    String key = (String) keys.next();
    map.put(key, keysObject.get(key));
}

List<Map<String, Object>> list = new ArrayList<>();
list.add(map);

int keyObjectSize = keysObject.length();
The JSONObject can have more keys, but in this example I am using two.
I want to use the keys HistoryId and TenantName to generate the SQL query below, where the keys become the column names and the key count is used to generate the value parameters (?1, ?2).
INSERT INTO "Alert" (historyid, tenantname) VALUES (?1, ?2)

Not able to upload JSON data to BigQuery tables using C#

I am trying to upload JSON data to one of the tables created under a dataset in BigQuery, but it fails with "Google.GoogleApiException: 'Google.Apis.Requests.RequestError
Not found: Table currency-342912:sampleDataset.currencyTable [404]".
The service account is created with the roles BigQuery.Admin/DataEditor/DataOwner/DataViewer.
The roles are also applied to the table itself.
Below is the snippet:
public static void LoadTableGcsJson(string projectId = "currency-342912", string datasetId = "sampleDataset", string tableId = "currencyTable ")
{
    // Read the service account key json file
    string dir = Directory.GetParent(Directory.GetCurrentDirectory()).Parent.Parent.FullName + "\\" + "currency-342912-ae9b22f23a36.json";
    GoogleCredential credential = GoogleCredential.FromFile(dir);
    string toFileName = Directory.GetParent(Directory.GetCurrentDirectory()).Parent.Parent.FullName + "\\" + "sample.json";
    BigQueryClient client = BigQueryClient.Create(projectId, credential);
    var dataset = client.GetDataset(datasetId);
    using (FileStream stream = File.Open(toFileName, FileMode.Open))
    {
        // Create and run job
        BigQueryJob loadJob = client.UploadJson(datasetId, tableId, null, stream); // This throws the error
        loadJob.PollUntilCompleted();
    }
}
The table permissions use the service account "sampleservicenew", as shown in the screenshot.
Any leads on this would be much appreciated.
Your issue might reside in your user credentials. Please follow these steps to check your code:
Check that the user you are using to execute your application has access to the table you want to insert data into.
Check that your JSON tags match your table columns.
Check that your JSON inputs (table name, dataset name) are correct.
Use a dummy table to perform a quick test of your credentials and data integrity.
These steps will help you identify what could be missing on your side. I performed the following operations to reproduce your case:
I created a table in BigQuery based on the values of your JSON data:
create or replace table `projectid.datasetid.tableid` (
  IsFee BOOL,
  BlockDateTime TIMESTAMP,
  Address STRING,
  BlockHeight INT64,
  Type STRING,
  Value INT64
);
I created a .json file with your test data:
{"IsFee":false,"BlockDateTime":"2018-09-11T00:12:14Z","Address":"tz3UoffC7FG7zfpmvmjUmUeAaHvzdcUvAj6r","BlockHeight":98304,"Type":"OUT","Value":1}
{"IsFee":false,"BlockDateTime":"2018-09-11T00:12:14Z","Address":"tz2KuCcKSyMzs8wRJXzjqoHgojPkSUem8ZBS","BlockHeight":98304,"Type":"IN","Value":18}
I built and ran the code below.
using System;
using Google.Cloud.BigQuery.V2;
using Google.Apis.Auth.OAuth2;
using System.IO;

namespace stackoverflow
{
    class Program
    {
        static void Main(string[] args)
        {
            string projectid = "projectid";
            string datasetid = "datasetid";
            string tableid = "tableid";
            string safilepath = "credentials.json";
            var credentials = GoogleCredential.FromFile(safilepath);
            BigQueryClient client = BigQueryClient.Create(projectid, credentials);
            using (FileStream stream = File.Open("data.json", FileMode.Open))
            {
                BigQueryJob loadJob = client.UploadJson(datasetid, tableid, null, stream);
                loadJob.PollUntilCompleted();
            }
        }
    }
}
Output:

Row | IsFee | BlockDateTime           | Address                              | BlockHeight | Type | Value
1   | false | 2018-09-11 00:12:14 UTC | tz3UoffC7FG7zfpmvmjUmUeAaHvzdcUvAj6r | 98304       | OUT  | 1
2   | false | 2018-09-11 00:12:14 UTC | tz2KuCcKSyMzs8wRJXzjqoHgojPkSUem8ZBS | 98304       | IN   | 18
Note: you can use the code above to quickly test your credentials and the integrity of the data you want to insert.
I also made use of the following documentation:
Load Credentials from a file
Google.Cloud.BigQuery.V2
Load Json data into a new table

Neo4j - How to formulate this query with executeCypher using params

I have a query like this:
start n = node:node_auto_index('ids:"123", "456" ... ') return n
Here 123, 456, ... is a list of keys passed as a single parameter. Now when I try to write this in Java:
String q = " START n=node:node_auto_index('key:{ids}') return n ";
Map<String, Object> map = new HashMap<String, Object>();
map.put("ids", keyList); // keyList is a list of strings
But calling graphstoreclient.executeCypher(q, map) somehow fails with a parse error. Can you point me to any documentation or the correct syntax for this?
PS: this query works fine in the console.
Since you're supplying a lucene query string, parameterize the entire string:
String q = " START n=node:node_auto_index({ids}) return n ";
Map<String, Object> map = new HashMap<String, Object>();
map.put("ids", keyList);
keyList should now be the entire lucene query string, i.e. ids:"123", "456" ...
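For illustration, a minimal sketch of building that string from the original list (assuming keyList starts as the List<String> from the question and graphstoreclient is your client):

// Build the full lucene query string, then pass it as the single {ids} parameter.
List<String> keyList = Arrays.asList("123", "456");
StringBuilder lucene = new StringBuilder("ids:");
for (int i = 0; i < keyList.size(); i++) {
    if (i > 0) lucene.append(", ");
    lucene.append('"').append(keyList.get(i)).append('"');
}

String q = "START n=node:node_auto_index({ids}) RETURN n";
Map<String, Object> map = new HashMap<String, Object>();
map.put("ids", lucene.toString()); // one string, not a List
graphstoreclient.executeCypher(q, map);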

RallyRestAPI Create Build with ChangeSets

I am creating Rally Build records as part of a TeamCity to Rally integration but am having issues associating a Build with a Changeset.
I find a set of related Changesets that match particular criteria and have them in an array of Strings. I then create a JsonArray object, add these "_ref" strings to the array as JsonPrimitives, add the array to my create JSON object, and add it to Rally.
However, the build is created but the result has an empty Changeset array.
I have tried including the changesets in the createRequest and also doing an updateRequest. In both cases the response is SUCCESS, no errors or warnings are reported, and the Changeset array is returned as null. Re-querying shows all other data as expected, but the Changeset array is empty.
Here is the code.
JsonObject obj = new JsonObject();
obj.addProperty("Workspace", def.getWorkspace().getRef());
obj.addProperty("Duration", 1.05);
obj.addProperty("Message", "Master 4683 Success");
obj.addProperty("Start", isoFormat.format(new Date()));
obj.addProperty("Status", "SUCCESS");
obj.addProperty("Number", "4683");
obj.addProperty("Uri", "http://");
obj.addProperty("BuildDefinition", def.getRef());

// changeSets is an ArrayList<String> of "_ref" strings of VALID changeset references.
if (changeSets != null && changeSets.size() > 0) {
    JsonArray changeSetList = new JsonArray();
    for (String id : changeSets) {
        changeSetList.add(new JsonPrimitive(id));
    }
    obj.add("Changesets", changeSetList);
}

String ref = connector.Create("Build", obj);
connector.Delete(ref, null);
Any ideas?
My thought is that instead of populating your JsonArray with JsonPrimitives holding just the value of the ref, you actually need a JsonObject with a key/value pair of {"_ref": "/changeset/12345678910.js"}, i.e. make a change similar to the following:
// changeSets is an ArrayList<String> of "_ref" strings of VALID changeset references.
if (changeSets != null && changeSets.size() > 0) {
    JsonArray changeSetList = new JsonArray();
    for (String id : changeSets) {
        JsonObject thisChangeset = new JsonObject();
        thisChangeset.addProperty("_ref", id);
        changeSetList.add(thisChangeset);
    }
    obj.add("Changesets", changeSetList);
}
And I believe your code should work as expected.

How to insert an image in PostgreSQL?

I am wondering how to insert an image into one of the fields in my PostgreSQL table. I cannot find an appropriate tutorial regarding this matter. The datatype of the field is oid. Has anyone tried this? Thanks!
// All LargeObject API calls must be within a transaction
conn.setAutoCommit(false);

// Get the Large Object Manager to perform operations with
LargeObjectManager lobj = ((org.postgresql.PGConnection) conn).getLargeObjectAPI();

// Create a new large object
int oid = lobj.create(LargeObjectManager.READ | LargeObjectManager.WRITE);

// Open the large object for writing
LargeObject obj = lobj.open(oid, LargeObjectManager.WRITE);

// Now open the file
File file = new File("myimage.gif");
FileInputStream fis = new FileInputStream(file);

// Copy the data from the file to the large object
byte[] buf = new byte[2048];
int s, tl = 0;
while ((s = fis.read(buf, 0, 2048)) > 0) {
    obj.write(buf, 0, s);
    tl += s;
}

// Close the large object
obj.close();

// Now insert the row into imagesLO
PreparedStatement ps = conn.prepareStatement("INSERT INTO imagesLO VALUES (?, ?)");
ps.setString(1, file.getName());
ps.setInt(2, oid);
ps.executeUpdate();
ps.close();
fis.close();
I found that sample code here; it covers a really good set of SQL operations.
To quote this site:
PostgreSQL database has a special data type to store binary data called bytea. This is a non-standard data type. The standard data type in databases is BLOB.
You need to write a client to read the image file, for example:
File img = new File("woman.jpg");
FileInputStream fin = new FileInputStream(img);
Connection con = DriverManager.getConnection(url, user, password);
PreparedStatement pst = con.prepareStatement("INSERT INTO images(data) VALUES(?)");
pst.setBinaryStream(1, fin, (int) img.length());
pst.executeUpdate();
You can use either the bytea type or the large objects facility. However, note that depending on your use case it might not be a good idea to put your images in the DB, because of the additional load it may put on the DB server.
Rereading your question, I notice you mentioned you have a field of type oid. If this is an application you are modifying, that suggests to me it is using large objects. These objects get an oid, which you then need to store in another table to keep track of them.
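To illustrate that round trip, here is a minimal sketch of reading the image back via its stored oid. The column names imgname and imgoid are assumptions (the INSERT in the first answer does not show them), and conn is assumed to be an open java.sql.Connection using the PostgreSQL JDBC driver:

// Large Object reads must also run inside a transaction
conn.setAutoCommit(false);
LargeObjectManager lobj = ((org.postgresql.PGConnection) conn).getLargeObjectAPI();

// Look up the oid stored for the file name (imgname/imgoid are assumed column names)
PreparedStatement ps = conn.prepareStatement("SELECT imgoid FROM imagesLO WHERE imgname = ?");
ps.setString(1, "myimage.gif");
ResultSet rs = ps.executeQuery();
if (rs.next()) {
    int oid = rs.getInt(1);
    LargeObject obj = lobj.open(oid, LargeObjectManager.READ);
    byte[] data = obj.read(obj.size()); // read the whole object into memory
    obj.close();

    // Write the bytes back out to a file
    FileOutputStream fos = new FileOutputStream("copy-of-myimage.gif");
    fos.write(data);
    fos.close();
}
rs.close();
ps.close();
conn.commit();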

How can I update a document without losing fields?

CommonsHttpSolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr/");

SolrInputDocument doc1 = new SolrInputDocument();
doc1.addField("id", "id1");
doc1.addField("name", "doc1");
doc1.addField("price", new Float(10));

SolrInputDocument doc2 = new SolrInputDocument();
doc2.addField("id", "id1");
doc2.addField("name", "doc2");

server.add(doc1);
server.add(doc2);
server.commit();

SolrQuery query = new SolrQuery();
query.setQuery("id:id1");
query.addSortField("price", SolrQuery.ORDER.desc);
QueryResponse rsp = server.query(query);

Iterator<SolrDocument> iter = rsp.getResults().iterator();
while (iter.hasNext()) {
    SolrDocument doc = iter.next();
    Collection<String> fieldNames = doc.getFieldNames();
    Iterator<String> fieldIter = fieldNames.iterator();
    StringBuffer content = new StringBuffer("");
    while (fieldIter.hasNext()) {
        String field = fieldIter.next();
        content.append(field).append(":").append(doc.get(field)).append(" ");
    }
    System.out.println(content);
}
The problem is that I want to get the result "id:id1 name:doc2 price:10.0", but the output is "id:id1 name:doc2".
How can I modify my program so that I get "id:id1 name:doc2 price:10.0"?
Since you are adding the documents with the same id, you are basically adding the same document twice.
Solr will update/overwrite the document; an update is basically a delete followed by an add.
As the second document you added with the same id does not have the price field, that field won't be added, and you won't find it in the index.
You need to include all the fields, changed and unchanged, when you add the document back:
doc2.addField("price", new Float(10)); // adds the price back to the document