Checking if a table exists in BigQuery Java - google-bigquery

I'm trying to write a function to check whether a table exists or not in BigQuery. The following code always returns true. Where is the problem?
Thanks!
private static boolean checkTableExist() {
    try {
        BigQueryOptions.Builder optionsBuilder = BigQueryOptions.newBuilder();
        BigQuery bigquery = optionsBuilder.build().getService();
        bigquery.getTable(options.getBigQueryDatasetId(), options.getBigQueryTableId());
    } catch (Exception e) {
        return false;
    }
    return true;
}

I don't think you should rely on a Java exception to test a boolean condition.
I haven't looked closely at the getTable() method, but here is how I check whether a table exists:
public boolean isExisting() {
    return getDataset().get(tableName) != null;
}

protected Dataset getDataset() {
    return bigQuery.getDataset(dataSetName);
}

Try this (note that get(tableName) returns null when the table is missing, so guard against that before calling exists()):
Table table = bigquery.getDataset(datasetName).get(tableName);
if (table != null && table.exists()) {
    // table exists
} else {
    // table does not exist in BQ dataset
}
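The reason the original function always returns true: in the google-cloud-java client, looking up a missing table returns null rather than throwing, so the catch block is never reached. The difference between the exception-based check and the null check can be shown with a minimal, self-contained sketch. FakeBigQuery below is a hypothetical in-memory stand-in for the client (the real one needs credentials), not part of the BigQuery API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for the BigQuery client: like the real client,
// getTable returns null for a missing table instead of throwing.
class FakeBigQuery {
    private final Map<String, String> tables = new HashMap<>();

    void createTable(String id) {
        tables.put(id, id);
    }

    String getTable(String id) {
        return tables.get(id); // null for a missing table, no exception
    }
}

public class TableExistsSketch {

    // The pattern from the question: only an exception yields false.
    static boolean existsViaException(FakeBigQuery bq, String id) {
        try {
            bq.getTable(id);
        } catch (Exception e) {
            return false;
        }
        return true; // reached even when getTable returned null
    }

    // The pattern from the answers: test the return value for null.
    static boolean existsViaNullCheck(FakeBigQuery bq, String id) {
        return bq.getTable(id) != null;
    }

    public static void main(String[] args) {
        FakeBigQuery bq = new FakeBigQuery();
        bq.createTable("known_table");
        System.out.println(existsViaException(bq, "missing"));     // true (wrong!)
        System.out.println(existsViaNullCheck(bq, "missing"));     // false
        System.out.println(existsViaNullCheck(bq, "known_table")); // true
    }
}
```

With the real client the same null check applies: a lookup of a non-existent table returns null, which is what the isExisting() answer above relies on.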

Related

Checking for record existence in column sql

I am trying to check whether the user's selection (u.userId) is already in the column (urid): if it is not, return true and run the other function; if the selected data already exists, return false. The method is declared void, so my returns do not compile; what is happening? I'm still new to ASP.NET and hoping for some help. Thanks.
public string URID { get; set; }

public void urid_existence(User u)
{
    DBHandler dbh = new DBHandler();
    dbh.OpenConnection();
    string sql = "select urid from FCS_COUGRP";
    if (u.UserID != u.URID)
    {
        userH.changeUrserGroup(u);
        return true;
    }
    else
    {
        return false;
    }
}
void means that the method does not return anything, but you want to return a bool. So this is the correct signature:
public bool urid_existence(User u)
{
    // ...
    if (u.UserID != u.URID)
    {
        userH.changeUrserGroup(u);
        return true;
    }
    else
    {
        return false;
    }
}

trouble creating a pig udf schema

I'm trying to parse XML and I'm having trouble with my UDF returning a tuple. I am following the example from http://verboselogging.com/2010/03/31/writing-user-defined-functions-for-pig
pig script
titles = FOREACH programs GENERATE (px.pig.udf.PARSE_KEYWORDS(program))
AS (root_id:chararray, keyword:chararray);
here is the output schema code:
override def outputSchema(input: Schema): Schema = {
  try {
    val s: Schema = new Schema
    s.add(new Schema.FieldSchema("root_id", DataType.CHARARRAY))
    s.add(new Schema.FieldSchema("keyword", DataType.CHARARRAY))
    return s
  } catch {
    case e: Exception => return null
  }
}
I'm getting this error
pig script failed to validate: org.apache.pig.impl.logicalLayer.FrontendException:
ERROR 0: Given UDF returns an improper Schema.
Schema should only contain one field of a Tuple, Bag, or a single type.
Returns: {root_id: chararray,keyword: chararray}
Update, final solution (in Java):
public Schema outputSchema(Schema input) {
    try {
        Schema tupleSchema = new Schema();
        tupleSchema.add(input.getField(1));
        tupleSchema.add(input.getField(0));
        return new Schema(new Schema.FieldSchema(getSchemaName(this.getClass().getName().toLowerCase(), input), tupleSchema, DataType.TUPLE));
    } catch (Exception e) {
        return null;
    }
}
You will need to wrap your s schema instance in another Schema object.
Try returning new Schema(new FieldSchema(..., s, DataType.TUPLE)), as in the template below:
Here is my answer in Java (fill out your variable names):
@Override
public Schema outputSchema(Schema input) {
    Schema tupleSchema = new Schema();
    try {
        tupleSchema.add(new FieldSchema("root_id", DataType.CHARARRAY));
        tupleSchema.add(new FieldSchema("keyword", DataType.CHARARRAY));
        return new Schema(new FieldSchema(getSchemaName(this.getClass().getName().toLowerCase(), input), tupleSchema, DataType.TUPLE));
    } catch (FrontendException e) {
        e.printStackTrace();
        return null;
    }
}
Would you try:
titles = FOREACH programs GENERATE (px.pig.udf.PARSE_KEYWORDS(program));
If that doesn't error, then try:
titles = FOREACH titles GENERATE
    $0 AS root_id
    ,$1 AS keyword
;
And tell me the error?

Passing custom parameters to a pig udf function in java

This is the way I am looking to process my data from Pig:
A = Load 'data' ...
B = FOREACH A GENERATE my.udfs.extract(*);
or
B = FOREACH A GENERATE my.udfs.extract('flag');
So basically extract either takes no arguments or takes a single argument, 'flag'.
On my UDF side...
@Override
public DataBag exec(Tuple input) throws IOException {
    // if flag == true
    //     do this
    // else
    //     do that
}
Now how do i implement this in pig?
The preferred way is to use DEFINE.
"Use DEFINE to specify a UDF function when:
...
The constructor for the function takes string parameters. If you need to use different constructor parameters for different calls to the function you will need to create multiple defines – one for each parameter set"
E.g., given the following UDF:
public class Extract extends EvalFunc<String> {

    private boolean flag;

    public Extract(String flag) {
        // Note that a boolean param cannot be passed from script/grunt,
        // therefore pass it as a string
        this.flag = Boolean.valueOf(flag);
    }

    public Extract() {
    }

    public String exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0) {
            return null;
        }
        try {
            if (flag) {
                ...
            } else {
                ...
            }
        } catch (Exception e) {
            throw new IOException("Caught exception processing input row ", e);
        }
    }
}
Then
define ex_arg my.udfs.Extract('true');
define ex my.udfs.Extract();
...
B = foreach A generate ex_arg(); --calls extract with flag set to true
C = foreach A generate ex(); --calls extract without any flag set
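The mechanics behind DEFINE can be seen without Pig at all: for each define, Pig instantiates the UDF once via the matching constructor, and the UDF parses its String flag itself. Here is a stripped-down, runnable sketch of that two-constructor pattern; the class name and the String-based exec stand-in are illustrative, not part of the Pig API:

```java
// Stand-alone sketch of the two-constructor pattern used above.
// Pig would construct one instance per DEFINE; here we do it by hand.
public class ExtractSketch {

    private final boolean flag;

    // Corresponds to: define ex my.udfs.Extract();
    public ExtractSketch() {
        this.flag = false;
    }

    // Corresponds to: define ex_arg my.udfs.Extract('true');
    // The flag arrives as a String because a boolean literal
    // cannot be passed from the script/grunt.
    public ExtractSketch(String flag) {
        this.flag = Boolean.valueOf(flag);
    }

    // Illustrative stand-in for exec(Tuple): branch on the flag.
    public String exec(String input) {
        return flag ? input.toUpperCase() : input;
    }

    public static void main(String[] args) {
        ExtractSketch exArg = new ExtractSketch("true"); // flag set
        ExtractSketch ex = new ExtractSketch();          // no flag
        System.out.println(exArg.exec("hello")); // HELLO
        System.out.println(ex.exec("hello"));    // hello
    }
}
```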
Another option (a hack?):
In this case the UDF gets instantiated with its no-arg constructor and you pass the flag you want to evaluate in its exec method. Since this method takes a tuple as its parameter, you need to first check whether the first field is the boolean flag.
public class Extract extends EvalFunc<String> {

    public String exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0) {
            return null;
        }
        try {
            boolean flag = false;
            if (input.getType(0) == DataType.BOOLEAN) {
                flag = (Boolean) input.get(0);
            }
            // process the rest of the fields in the tuple
            if (flag) {
                ...
            } else {
                ...
            }
        } catch (Exception e) {
            throw new IOException("Caught exception processing input row ", e);
        }
    }
}
Then
...
B = foreach A generate Extract(true, *); --use flag
C = foreach A generate Extract(*);
I'd rather stick with the first solution, as this one smells like a hack.

How to define 'geography' type using Npgsql and OrmLite (using postgresql, postgis, c#)

How do I define a postgis 'geography' type in my C# class model so that OrmLite can easily pass it through to Postgresql so I can run spatial queries in addition to saving spatial data to the 'geography' column?
The best library for this case is NetTopologySuite.
You can use it like this:
protected GisSharpBlog.NetTopologySuite.Geometries.Geometry _geom;

public GisSharpBlog.NetTopologySuite.Geometries.Geometry Geom
{
    get { return _geom; }
    set { _geom = value; }
}

protected string _geomwkt;

public virtual string GeomWKT
{
    get
    {
        if (this.Geom != null)
            return this.Geom.ToText();
        else
            return "";
    }
    set
    {
        string wktString = value;
        if (string.IsNullOrEmpty(wktString))
            _geom = null;
        else
        {
            var fact = new GeometryFactory();
            var wktreader = new WKTReader(fact);
            _geom = (Geometry)wktreader.Read(wktString);
        }
    }
}

SAP JCo RETURN Table empty when using TransactionID

I'm using the JCo library to access a standard SAP BAPI. Everything works, except that the RETURN table is always empty when I use the TID (transaction ID).
When I just remove the TID, the RETURN table is filled with warnings etc. But unfortunately I need to use the TID for the transactional BAPI, otherwise the changes are not committed.
Why is the RETURN table empty when using a TID?
Or how must I commit changes to a transactional BAPI?
Here is pseudo-code of the BAPI access:
import com.sap.conn.jco.*;
import org.apache.commons.logging.*;

public class BapiSample {

    private static final Log logger = LogFactory.getLog(BapiSample.class);
    private static final String CLIENT = "400";
    private static final String INSTITUTION = "1000";

    protected JCoDestination destination;

    public BapiSample() {
        this.destination = getDestination("mySAPConfig.properties");
    }

    public void execute() {
        String tid = null;
        try {
            tid = destination.createTID();
            JCoFunction function = destination.getRepository().getFunction("BAPI_PATCASE_CHANGEOUTPATVISIT");
            function.getImportParameterList().setValue("CLIENT", CLIENT);
            function.getImportParameterList().setValue("INSTITUTION", INSTITUTION);
            function.getImportParameterList().setValue("MOVEMNT_SEQNO", "0001");
            // Here we then set all remaining parameters of the BAPI...
            // ...
            // Now the execute
            function.execute(destination, tid);
            // And getting the RETURN table. !!! THIS IS ALWAYS EMPTY!
            JCoTable returnTable = function.getTableParameterList().getTable("RETURN");
            int numRows = returnTable.getNumRows();
            for (int i = 0; i < numRows; i++) {
                returnTable.setRow(i);
                logger.info("RETURN VALUE: " + returnTable.getString("MESSAGE"));
            }
            JCoFunction commit = destination.getRepository().getFunction("BAPI_TRANSACTION_COMMIT");
            commit.execute(destination, tid);
            destination.confirmTID(tid);
        } catch (Throwable ex) {
            try {
                if (destination != null) {
                    JCoFunction rollback = destination.getRepository().getFunction("BAPI_TRANSACTION_ROLLBACK");
                    rollback.execute(destination, tid);
                }
            } catch (Throwable t1) {
            }
        }
    }

    protected static JCoDestination getDestination(String fileName) {
        JCoDestination result = null;
        try {
            result = JCoDestinationManager.getDestination(fileName);
        } catch (Exception ex) {
            logger.error("Error during destination resolution", ex);
        }
        return result;
    }
}
UPDATE 10.01.2013: I was finally able to get both the RETURN table filled and the inputs committed. The solution is to do both: a commit without TID to get the RETURN table, and then another commit with TID.
Very, very strange, but maybe that is the correct usage of the JCo commits. Can someone explain this to me?
You should not call the execute method twice; it will increment the sequence number.
You should use the begin and end methods of the JCoContext class.
If you call the begin method at the beginning of the process, the data will be updated and the messages will be returned.
Here is the sample code:
JCoDestination destination = JCoDestinationManager.getDestination("");
try {
    JCoContext.begin(destination);
    function.execute(destination);
    function.execute(destination);
} catch (AbapException ex) {
    ...
} catch (JCoException ex) {
    ...
} catch (Exception ex) {
    ...
} finally {
    JCoContext.end(destination);
}
You can refer to further information at this URL:
http://www.finereporthelp.com/download/SAP/sapjco3_linux_32bit/javadoc/com/sap/conn/jco/JCoContext.html