I am using the ThingWorx platform for IoT and I have connected ThingWorx to SQL. I have created two database SQL services, one of type query and one of type command. I have also created two tables named Temperature and Humidity.
I am getting Temperature and Humidity values in the ThingWorx platform, but I am unable to send them to the database. Can anyone help? How can I call the properties in the command service?
DatabaseConf SQL command code
insert into INFO(Temperature)
values ([[]]);
Thing-Test Subscription Code
var params={Temp:me.Temp_Prop,Hum:me.Hum_Prop};
var result=Things["DatabaseConf"].InsertRecords(params);
When you create a service in ThingWorx you can select the type: "query" is a valid option when you have to insert or retrieve values from a database. You can test the query in SQL first and copy/paste it into the service.
Check that the Thing that connects you to the database has the property "isConnected" equal to true.
You should also think about how to trigger the insert service; you can use a ValueChange trigger or a periodic trigger.
You need an additional service of type JavaScript where you can retrieve the properties using me.propertyName and invoke the SQL services. You can add input parameters to the SQL services and use them like this: [[inputParameter]]. In your example that should look like:
insert into INFO (Temperature, Humidity)
values ([[Temp]], [[Hum]]);
If you use the arrow on the right of your input parameter, ThingWorx will write it for you in the proper way already.
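For completeness, a matching query-type service could be used to check the inserted rows. This is only a minimal sketch, assuming the same INFO table with Temperature and Humidity columns; as suggested above, you can test it in SQL first and paste it into the service:
select Temperature, Humidity
from INFO;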
At last BigQuery supports using ; in queries, so I can write more than one query in one "block" if I separate them with semicolons.
If I run the code manually, it works, but I cannot schedule it.
When I want to schedule, I have two choices:
(New) Web UI: I must give a destination table. If I don't, I cannot save the scheduled query. But all my queries are UPDATEs and INSERTs with different "destination tables", like these:
UPDATE project.exampledataset.a
SET date = current_date()
WHERE true
;
INSERT INTO project.otherdataset.b
SELECT c,d
FROM project.otherdataset.c
So I cannot even set up scheduling in the Web UI.
Classic UI: I tried this because the official documentation states that I should leave the "destination table" blank, and the Classic UI allows it. I can set up the scheduling, but it doesn't run when it should. I get this error message by email: "Error status: Dataset specified in the query ('') is not consistent with Destination dataset 'exampledataset'."
AFAIK scripting (and using semicolons) is a very new feature in BigQuery, but I hope someone can help me.
Yes, I know that I could schedule every query one by one, but I would like to resolve it with one big script.
It looks like the scheduled query was originally defined with a destination dataset and an APPEND/TRUNCATE write disposition. When the same scheduled query is later changed to a DML query, the GUI does not show the dataset/table fields, so they cannot be cleared, and the error comes from the previously set dataset and table name in the scheduled query.
Hence the fix is to delete the scheduled query and create it from scratch with the DML query option. It worked for me.
Scripting is now supported in scheduled queries. However, a scripted query, when scheduled, does not currently support setting a destination table, so you still need DDL/DML to make changes to an existing table.
E.g.:
CREATE OR REPLACE TABLE destinationTable AS
SELECT *
FROM sourceTable
WHERE date >= maxDate
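Along the same lines, the UPDATE/INSERT case from the question could be scheduled as one multi-statement script with no destination table. This is only a sketch, reusing the project/dataset/table names from the question:
-- one scheduled script containing several DML statements
UPDATE project.exampledataset.a
SET date = CURRENT_DATE()
WHERE true;
INSERT INTO project.otherdataset.b
SELECT c, d
FROM project.otherdataset.c;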
As of 2022, the BQ Console UI will let you create a new scheduled query without a destination dataset, but it won't let you update a prior SELECT to use DDL/DML block syntax. However, you can use the BigQuery Data Transfer API to update the destinationDatasetId field, via transferconfigs/patch. Use transferconfigs/list to get the configId for a given scheduled query.
Note that you can either use the in-browser API Explorer, if you have the appropriate credentials, or write a programmatic solution. Also seems useful for setting/updating any other fields, including renaming scheduled queries.
I am currently using a function in SQL Server to get the max value of a certain column. I need this value to generate a specific number of dummy entries for flowfiles that are created later on.
Is there a way of calling this function via a NiFi processor?
By using ExecuteSQL I always get an error like "unable to execute SQL select query" or "the column 'ab' was not found" when using select ab.functionname() (ab is the login name of the db).
In SQL Server I can just use select ab.functionname() and get the desired results.
If there is no way of calling this function, is there another way to create one dummy entry per flowfile to reserve a place for them in the DB, so that no one else can insert or use these IDs (not autoincrement, because that is not possible) while the flowfiles are being processed?
I tried using $flowfile.count and the Counter processor, but this did not solve the problem.
For every flowfile it should look like INSERT INTO table (id, nr) VALUES (max(id)+1, anynumber); unfortunately ExecuteSQL is not able to do this.
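As a side note on the SQL itself, an aggregate such as MAX() cannot be used inside a VALUES list, but the same statement can be written as an INSERT ... SELECT. A minimal sketch, assuming a table named mytable with columns id and nr (the names and the literal 42 are just placeholders for "anynumber"):
-- compute the next id in the SELECT instead of in VALUES
INSERT INTO mytable (id, nr)
SELECT MAX(id) + 1, 42
FROM mytable;
Note that without additional locking, two concurrent inserts can still compute the same MAX(id) + 1.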
I think this conversation can help you:
https://community.hortonworks.com/questions/26170/does-executesql-processor-allow-to-execute-stored.html
Gist:
You can use ExecuteScript or ExecuteProcess to call an appropriate script. For example, for ExecuteProcess just call the sqlplus command: choose "sqlplus" as the command and in the command arguments set something like user_id/password@dbname @"script_path/someScript.sql". In someScript.sql you put something like:
execute spname(param)
You can write your own processor :) Of course that is more difficult and often unnecessary.
In SQL Server (2016) we have the SESSION_CONTEXT() and sp_set_session_context to retrieve/store custom variables in a key-value store. These values are available only in the session and their lifetime ends when the session is terminated. (Or in earlier versions the good old CONTEXT_INFO to store some data in a varbinary).
I am looking for a similar solution in EXASol (6.0).
An obvious one would be to create a table and store this info there; however, this requires a scheduled cleanup script and is more error prone than a built-in solution. This is the fallback plan, but I'd like to be sure that there are no other options.
Another option could be to create individual users in the database and configure them, but this was ruled out simply because of the number of users that would have to be added.
The use case is the following: an application has several users, and each user has some values to be used in every query. The application has access only to some views.
This works wonderfully in SQL Server, but we want to test EXASol as an alternative with the same functionality.
I cannot find anything related in the EXASol manual, but it is possible that I just missed something.
Here is a simplified sample code in SQL Server 2016
EXEC sp_set_session_context @key = N'filter', @value = 'asd', @read_only = 1;
CREATE VIEW FilteredMyTable AS
SELECT Col1, Col2, Col3 FROM MyTable
WHERE MyFilterCol = CONVERT(VARCHAR(32), SESSION_CONTEXT('filter'))
I've tried an obviously no-go solution, just to test if it works (it does not).
ALTER SESSION SET X_MY_CUSTOM_FILTER = "asd"
You cannot really set a session parameter in EXASOL; the only way to achieve something similar is to store the values that you need in a table with a structure like:
SESSION_ID   KEY      VALUE   READ_ONLY
8347387      filter   asd     1
With Lua you could create a script that makes it easier for you to manage these "session" variables.
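A minimal sketch of that fallback, using EXASOL's CURRENT_SESSION function to key the values to the current session (the SESSION_FILTER table and its column names are just made-up examples):
CREATE TABLE SESSION_FILTER (
    SESSION_ID  DECIMAL(20,0),
    KEY_NAME    VARCHAR(128),
    KEY_VALUE   VARCHAR(2000)
);
-- the application stores its value for the current session ...
INSERT INTO SESSION_FILTER VALUES (CURRENT_SESSION, 'filter', 'asd');
-- ... and the view picks it up, similar to SESSION_CONTEXT('filter')
CREATE VIEW FilteredMyTable AS
SELECT t.Col1, t.Col2, t.Col3
FROM MyTable t
JOIN SESSION_FILTER f
  ON f.SESSION_ID = CURRENT_SESSION
 AND f.KEY_NAME = 'filter'
WHERE t.MyFilterCol = f.KEY_VALUE;
As noted in the question, the rows left over from finished sessions still need a scheduled cleanup.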
I think you can achieve what you need by using the scripting capabilities within Exasol - see section 3.5 in the user manual.
You could also handle the parameterisation 'externally' via a shell script.
I'm getting a little confused about using parameters with SQL queries, and seeing some things that I can't immediately explain, so I'm just after some background info at this point.
First, is there a standard format for parameter names in queries, or is this database/middleware dependent? I've seen both this:
DELETE * FROM @tablename
and...
DELETE * FROM :tablename
Second - where (typically) does the parameter replacement happen? Are parameters replaced/expanded before the query is sent to the database, or does the database receive params and query separately, and perform the expansion itself?
Just as background, I'm using the DevArt UniDAC toolkit from a C++Builder app to connect via ODBC to an Excel spreadsheet. I know this is almost pessimal in a few ways... (I'm trying to understand why a particular command works only when it doesn't use parameters)
With data access libraries such as UniDAC or FireDAC, you can use macros. They allow you to use special markers (called macros) in the places of a SQL command where parameters are disallowed. I don't know the UniDAC API, but will provide a sample for FireDAC:
ADQuery1.SQL.Text := 'DELETE * FROM &tablename';
ADQuery1.MacroByName('tablename').AsRaw := 'MyTab';
ADQuery1.ExecSQL;
Second - where (typically) does the parameter replacement happen?
It doesn't. That's the whole point. Data elements in your query stay data items. Code elements stay code elements. The two never intersect, and thus there is never an opportunity for malicious data to be treated as code.
connect via ODBC to an Excel spreadsheet... I'm trying to understand why a particular command works only when it doesn't use parameters
Excel isn't really a database engine, but even if it were, you still couldn't use a parameter for the name of a table.
SQL parameters are sent to the database. The database performs the expansion itself. That allows the database to set up a query plan that will work for different values of the parameters.
Microsoft always uses @parname for parameters. Oracle uses :parname. Other databases are different.
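For illustration, the same parameterised statement in the two marker styles (the table and column names are made up):
-- SQL Server style
DELETE FROM Orders WHERE CustomerId = @customerId
-- Oracle style (also common in Delphi/C++Builder components)
DELETE FROM Orders WHERE CustomerId = :customerId
In both cases the marker stands for a value, never for an identifier such as a table name. ODBC itself only knows positional ? markers; a library like UniDAC typically translates named parameters into those for you.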
No database I know of allows you to specify the table name as a parameter. You have to expand that client side, like:
command.CommandText = string.Format("DELETE FROM {0}", tableName);
P.S. A * is not allowed after a DELETE. After all, you can only delete whole rows, not a set of columns.
Hello,
How can I transfer the table Command via WCF?
I have an idea:
On the client side:
use normal SQL syntax (SELECT * FROM COMMAND), wrap the result into a List, IEnumerable or another type of collection, and then use a WCF operation List GetCommand().
And on the server side:
call the WCF List GetCommand(), then loop over the collection and use INSERT INTO COMMAND ... to insert into the table Command.
Is that a good idea? If not, could you give me a hint?
Thank you in advance,
Stev
PS: I just want to transfer specific data (not the whole database):
SELECT * FROM COMMAND WHERE REGION_CLIENT = 345
If what you are trying to do is transfer data from a local client database to the server, you should use the Microsoft Sync Framework; see: http://msdn.microsoft.com/en-us/sync/bb736753