How to get a specific row from Aerospike using aql

When I query in Aerospike using the following, it works:
aql> select * from connekt.inapp
However, to find a specific entry I am trying the following query, but it does not work:
aql> select * from connekt.inapp where DIGEST = "viwZnPMMutuTZkPBV/PPL6hmWW0="
Error: (2) AEROSPIKE_ERR_RECORD_NOT_FOUND
How do I get a specific row from Aerospike using aql?

The digest you are seeing, "AAAA....=", was a cosmetic bug in AQL - I believe it was fixed a couple of months ago; I am not sure which TOOLS release has the fix. [Bug - TOOLS-746]
It is rather moot because you already know the digest - you used it in the WHERE EDIGEST = "viwZn...." clause.
BTW, run
$ aql
aql> HELP
for a list of commonly used AQL commands. (WHERE DIGEST= and EDIGEST= are rarely useful in production. AQL is best used for exploring data, creating and managing secondary indexes, developing UDFs, and security management.)

After doing some research and going through the docs, I realised that in my case the digest is in Base64 format, so I have to query using EDIGEST, like the following:
aql> select * from connekt.inapp where EDIGEST = "viwZnPMMutuTZkPBV/PPL6hmWW0="
From the docs:
When providing the HEX representation of the digest (for example, from the server logs), use DIGEST:
SELECT * FROM <ns>[.<set>] WHERE DIGEST='DIGEST_HEX_STRING'
When providing the Base64 representation of the digest (for example, from an asbackup file), use EDIGEST:
SELECT * FROM <ns>[.<set>] WHERE EDIGEST='DIGEST_B64_STRING'
However, when querying like this, the digest in the result is AAAAAAAAAAAAAAAAAAAAAAAAAAA=, and I am not sure why that is the case.
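For reference, DIGEST and EDIGEST are just two encodings of the same 20-byte record digest, so you can convert one into the other. A minimal Python sketch (not part of the original answers), using the digest value from the question:

import base64

b64_digest = "viwZnPMMutuTZkPBV/PPL6hmWW0="    # Base64 form, for WHERE EDIGEST=
raw = base64.b64decode(b64_digest)             # the raw 20-byte record digest
print(raw.hex().upper())                       # hex form, for WHERE DIGEST='...'
print(base64.b64encode(raw).decode("ascii"))   # round-trips back to the Base64 form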

Related

Way around the bcp/freebcp queryout Long Query String error

I'm working with very restricted read-only sources (SQL Server DBs) and no root access to modify client options on Unix, so my idea was to use freebcp from FreeTDS in queryout mode to submit a joined SQL statement for an extract. And of course this won't allow anything above some puny 100 characters for the query text.
I wonder if anyone has found a way around it.

It's an old bug, fixed in Microsoft's Q279180, which can no longer be viewed. In fact, searching for it only gets their sardonic response: "We would like to show you a description here but the site won't allow us."
https://support.microsoft.com/en-US/search/results?query=Q279180

Mulesoft not able to pass dynamic SQL queries based on environments

Hello, for demonstration purposes I trimmed down my actual SQL query.
I have a SQL query:
SELECT *
FROM dbdev.training.courses
where dbdev is my DEV database name. When I migrate to the TEST environment, I want my query to change dynamically to:
SELECT *
FROM dbtest.training.courses
I tried using input parameters like {env: p('db_name')} and using them in the query as
SELECT * FROM :env.training.courses
or
SELECT * FROM (:env).training.courses
but neither of them worked. I don't want my SQL query in a properties file.
Can you please suggest a way to write my SQL query dynamically based on the environment? The only alternative I see is to deploy separate jars for different environments with different code.
You can set the value of the property to a variable and then use the variable with string interpolation.
Warning: creating dynamic SQL queries using any kind of string manipulation may expose your application to SQL injection security vulnerabilities.
Example:
#['SELECT * FROM $(vars.database default "dbtest").training.courses']
Actually, you can do a completely dynamic or partially dynamic query using the MuleSoft DB connector.
Please see this repo:
https://github.com/TheComputerClassroom/dynamicSQLGETandPATCH
Also, I'm about to post an update that allows joins.
At a high level, this is a "Query Builder" where the code that builds the query is written in DataWeave 2. I'm working on another version that allows joins between entities, too.
If you have questions, feel free to reply.
One way to do it is:
Create a variable before the DB connector (e.g. with Set Variable), building the table name from the environment property:
getTableName = #[p('env') ++ ".training.courses"]
Then reference it in the SQL query with string interpolation:
#["SELECT * FROM $(vars.getTableName)"]

Zeppelin alternative for K-V store

Is there any alternative for checking key-value entries while debugging an Ignite application? Zeppelin can only do SQL querying. The Visor command modify -get -c=CName is very tedious to work with, and it can't fetch entries by wildcard searching of keys. Or is there any way to query the K-V store via SQL queries?
You can use:
1) REST:
https://apacheignite.readme.io/docs/rest-api#get-and-remove
2) Thick Java, .NET, or C++ clients that use the native cache API
3) The Node.js client:
https://github.com/apache/ignite/blob/master/modules/platforms/nodejs/examples/CachePutGetExample.js
4) The Python thin client:
https://apacheignite.readme.io/docs/python-thin-client-key-value
5) The PHP thin client:
https://apacheignite.readme.io/docs/php-thin-client-key-value
I have probably missed some integrations.
Also, as far as I know, Zeppelin supports the cache API using Scala syntax:
https://zeppelin.apache.org/docs/0.8.0/interpreter/ignite.html
val cache: IgniteCache[AffinityUuid, String] = ignite.cache("words")
And the final way: you can add a query entity to your cache and run SQL queries like:
select _key, _val from table;
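For the Python thin client route (option 4 above), a minimal sketch of inspecting entries with pyignite - this assumes the pyignite package is installed, a node is listening on the default thin-client port 10800, and the cache name 'words' is just an example:

from pyignite import Client

client = Client()
client.connect('127.0.0.1', 10800)           # default thin-client port
cache = client.get_or_create_cache('words')
cache.put('hello', 'world')

# scan() iterates over all entries, so keys can be filtered
# client-side - a crude substitute for wildcard key search
for key, value in cache.scan():
    if str(key).startswith('hel'):
        print(key, value)

client.close()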

BigQuery DATE_DIFF Error: Encountered " <STRING_LITERAL>

I'm trying the following query from the BigQuery Standard SQL documentation:
SELECT DATE_DIFF(DATE '2010-07-07', DATE '2008-12-25', DAY) as days_diff;
https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators#date_diff
However, I'm receiving the following error from the UI:
Error: Encountered " "\'2010-07-07\' "" at line 1, column 23. Was expecting: ")" ... [Try using standard SQL (https://cloud.google.com/bigquery/docs/reference/standard-sql/enabling-standard-sql)]
This is a simple copy and paste from the doc into the web UI Query Editor.
Any idea on how to resolve this?
Below are examples for BigQuery Legacy SQL and Standard SQL, respectively.
Make sure you try the code exactly as it appears in this answer - not just the second line, but both lines, including the first line that looks like a comment. It is in fact an important part of the query, as it controls which SQL dialect will be in effect!
#legacySQL
SELECT DATEDIFF(DATE('2010-07-07'), DATE('2008-12-25')) AS days_diff
and
#standardSQL
SELECT DATE_DIFF(DATE '2010-07-07', DATE '2008-12-25', DAY) AS days_diff
Both return the result below:
Row days_diff
1   559
Ideally, you should consider migrating to Standard SQL.
Although the answer has already been provided in the comments to your question and by Mikhail in the other answer, let me share a complete answer that hopefully addresses all your doubts:
ERROR MESSAGE
As the error message suggests ([Try using standard SQL (...)]), you are running this sample using Legacy SQL (which would instead use the DATEDIFF function). You are right that you are running the exact same query provided in the documentation, but the issue is that the documentation you are using is for Standard SQL (the preferred query language in BigQuery), while you are using Legacy SQL (the default language in the old UI, which is the one you are using).
CHANGE THE QUERY LANGUAGE IN USE
First of all, I would like to stress the importance of using Standard SQL instead of Legacy SQL, as the former adds new functionality and is the currently recommended language to use with BigQuery. You can see the whole list of comparisons in the documentation, but if you are starting with BigQuery, I would go straight to Standard SQL.
Now, that being clarified, in order to use Standard SQL instead of Legacy SQL, you can have a look at the documentation here, but let me summarize the available options for you:
In the BigQuery UI, you can toggle the Use legacy SQL option inside the Show options menu. If this option is checked, you will be using Legacy SQL; if it is not, you will be using Standard SQL.
You can use a prefix in your query, like #standardSQL or #legacySQL, which overrides the default configuration and uses the language you specify. As an example of how to use it, have a look at the other answer by Mikhail, who shared a couple of examples using prefixes to identify the language in use. You should copy the complete query (including the prefix) into the UI, and you will see that it works successfully.
Finally, as suggested by Elliott, you can use the new UI, which has just recently been released in Beta. You can access it through the link https://console.cloud.google.com/bigquery instead of the old link https://bigquery.cloud.google.com that you have been using until now. You can find more information about the new BigQuery web UI in this other linked page too.
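If you run the query programmatically instead of in the web UI, the dialect question largely disappears: for example, the google-cloud-bigquery Python client defaults to Standard SQL. A minimal sketch, assuming the library is installed and application default credentials are configured:

from google.cloud import bigquery

client = bigquery.Client()  # picks up application default credentials
sql = "SELECT DATE_DIFF(DATE '2010-07-07', DATE '2008-12-25', DAY) AS days_diff"

# The client uses Standard SQL by default, so no #standardSQL prefix is needed
for row in client.query(sql).result():
    print(row.days_diff)  # 559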

MySql NetScaler DataStream Content Switching failing to detect select

We are using the new DataStream feature introduced in NetScaler 9 (we're on v10) to do content switching (described here: http://support.citrix.com/proddocs/topic/netscaler/ns-dbproxy-wrapper-con.html). We have a read-only virtual server that balances across several read-only MySql slaves. We use our Content Switching to send all "Selects" over to the read-only server.
The policy is configured as follows:
mysql.req.query.command.contains("select")
Our users send multi-part queries to our database server. Most often they are simple, like:
use database;
select col1 from table1;
Sometimes they put comments at the head of the query, for example:
-- this is my query
select col1 from table1;
What we've found is that if the query simply starts with a select, everything works swimmingly. However, in cases where a use statement or comments precede the query, the content switcher fails to detect that it is a select query and bypasses our read-only virtual server.
I am about to tell all of our developers that they must fully qualify every table in every query and avoid use statements (yes, this is a good thing anyway), and also that they cannot use comments in their SQL (that's just silly).
Does anyone know how I can configure NetScaler DataStream Content Switching to ignore comments and use statements?
The decision on where to send the query is made on the first line received after successful authentication... so ignoring the comment won't work.
You could set up a responder policy which sends back an error message saying "Please don't use SQL comments in commands sent to the load-balanced VIP". A bit draconian, but your devs would get the message fairly quickly. There is no way to ignore the comment and still base the decision on the select statement. However, I was under the impression that the select statement runs up to the first semicolon... so in your example above, it should (in theory) still find the select statement. I'd need to test that to be certain of the behaviour, however.
Also - the USE statement is critical. This is the DB on which all subsequent commands are issued.
It would be best practice NOT to use the USE statement, but instead to change the select statement to:
select col1 from database.table1;
Once a USE statement is seen, it prevents any subsequent commands from being pipelined down the same connection... So if there are a lot of USE statements, you will not get to enjoy the connection multiplexing functionality that comes with DataStream.
We learned that block-level comments are acceptable, but single-line comments are not.
This is properly ignored:
/* my comment */
These comment styles are treated as part of the query:
-- my comment
# my comment
Kind of ridiculous, when having SET autocommit=0 is perfectly reasonable. What about that situation?