Use client-request-properties with Kql magic

I'm trying to run a Kusto query in Jupyter using Kql magic version 0.1.114.post16. I would like to remove the 500k-row limit, and I think the notruncation option listed when running %kql --help "client-request-properties" should solve my problem, but I can't figure out how to pass it to Kql magic. It doesn't work like other options or commands.

I kind of solved this, even though I'm not sure whether this is the correct way to use Kql magic. I just added set notruncation; at the top of my query, like so
%%kql
set notruncation;
...
I'd like the --help output to make it clearer that these are not options of the Kqlmagic commands but rather have to be put inside the query.
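For reference, a full cell then looks something like this (StormEvents and the rest of the query below are just placeholders for illustration):
%%kql
set notruncation;
StormEvents
| where StartTime > ago(30d)
| take 1000000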

Related

How to use # instead of $ in DB2 query with iBatis

I use DB2 with iBatis in my project.
There are many
FETCH FIRST $perPg$ ROWS ONLY
queries for paging in the DaoMap.xml files, but this looks dangerous from a SQL injection standpoint, so I want to change them to use # instead of $, and I can't figure out how.
Expressions like CAST(#perPg# AS INTEGER) don't work in the FETCH clause. How can I solve this problem?
If you want to convert it to an integer (and ensure that the value that arrives really is an integer), you can try this:
#perPg:BIGINT#
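With that, the paging fragment in DaoMap.xml would read something like the line below (a sketch only, not verified against a particular iBatis version):
FETCH FIRST #perPg:BIGINT# ROWS ONLY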

SQLite Ruby Gem not working with string placeholders

So, I have this code here:
database.execute("SELECT * FROM #{table} WHERE id=#{id}")
But every time I run it I get
unrecognized token: "]" (SQLite3::SQLException)
I've tried different ways of using placeholders, but they don't work.
I have tried replacing the placeholders with literal strings and then running the SQL query, and it works like it should.
I think something is wrong with the two variables.
Checking your variables is an important debugging technique.
So print table and id before executing the SQL; their values may not be what you expect.
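A rough sketch of that check, plus the bound-parameter form for the id (placeholders can only bind values, not identifiers, so the table name still has to be interpolated):
puts table.inspect   # an Array here, for example, would explain the stray "]" in the error
puts id.inspect
# values can be bound with a placeholder; the table name cannot
database.execute("SELECT * FROM #{table} WHERE id = ?", [id])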

Using TKPROF and EXPLAIN with a lowercase username

I am attempting to tune our Oracle database, which has been running a little slowly lately.
I have generated a SQL trace file, and can run the basic TKPROF from the command prompt, and generate the appropriate output file.
tkprof.exe source.trc output.txt
I would very much like to see the execution plan as well since there are a good number of indexes that should be used with this database. To do this, I am trying to run this:
tkprof.exe source.trc output.txt EXPLAIN=mbw/password
The problem is that the username every application uses to connect with is lowercase (mbw for this example, and I have to leave it this way). So whenever I want to look at data, I have to put quotes around the user like this:
SELECT * FROM "mbw".TABLE1
Unfortunately, I can never seem to get TKPROF to connect as "mbw"/password; it always connects as mbw/password, which will never work. (I can see what TKPROF is attempting to connect with in the output.txt file.)
I have tried a bunch of permutations on the command line, and just can't seem to make it happen. I've tried things like:
... EXPLAIN="mbw"/password
... EXPLAIN=""mbw""/password
... EXPLAIN="""mbw"""/password
... EXPLAIN=^"mbw^"/password
Does anyone have any ideas on how to properly structure the TKPROF command so I can connect as a user with lowercase letters?
I fully apologize for my lack of Oracle and SQL skills; I have been rather unexpectedly thrown into this particular job and am trying to learn as fast as I can.

How to run an Oracle query in Linux with a table-like output

I'm totally new to running SQL queries on Linux and I'm having a hard time dealing with the output.
So I managed to access my Oracle database on Linux and am trying to run a simple query right now:
SELECT IN_01, OUT_BD_01 FROM TRANSLATION_ROW WHERE IN_01 = 'LS3K5GB';
I'm expecting a table-like output but instead I got this:
Any help would be much appreciated. By the way, I'm accessing my Oracle server through PuTTY; I don't know if that helps in any way.
--forgot to mention that I also use sqlplus. Don't know if that would make any difference.
Thanks in advance.
Welcome to the weird and wonderful world of Oracle.
Viewing large amounts of data (especially "wide" data) through sqlplus has always been less than pretty. Even back in the 1990s, Oracle rival Ingres had a rather nice isql which made a much better fist of this, although the flipside was that using isql to spool to a data file (no headers and trimmings, etc.) was slightly harder. I think the rather primitive nature of SQL*Plus is why TOAD, SQL Developer, etc. have become popular.
To make the output easier to read, you need to learn the basics of sqlplus formatting, in particular SET LINES, PAGES, TRIMSPOOL, TAB, and the COLUMN formatting command.
Use COLUMN to control the formatting of each column.
One possible option is to use SET MARKUP and spool to a file, which formats the output as an HTML table, but then you need an HTML viewer/browser to view the results.
On PuTTY your options are limited, but if you have xterm and can invoke a browser on Linux, you might find something like this shell script useful:
#!/bin/bash
# run the spool script below, then open the generated HTML in a browser
sqlplus un/pw @the_file.sql
firefox the_output.html
Contents of the_file.sql:
-- write the query results to the_output.html as an HTML table
SET MARKUP HTML ON SPOOL ON
spool the_output.html
SELECT * FROM user_objects;
spool off
quit
If you have a share between the Linux system where the_output.html resides and Windows, and can mount it on Windows, you could run the query on Linux with MARKUP on, spool to the share, then click refresh in the browser.
Clunky, and not really what you want, but try it and see what you get.
By default, sqlplus displays the entire column width; that is all that is happening here.
You can format your columns before running the query, like the command below:
e.g.: format a column to display only 10 characters
column IN_01 format a10
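Applied to both columns in the original query (assuming they are both character columns; the widths here are just guesses), it might look like this:
column IN_01 format a10
column OUT_BD_01 format a30
SELECT IN_01, OUT_BD_01 FROM TRANSLATION_ROW WHERE IN_01 = 'LS3K5GB';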
There are some basic configuration tricks that you should apply when using SQLplus. A basic set of parameters would be something like this:
set pagesize 50000
set linesize 135
set long 50000
set trimspool on
set tab off
All these should be placed in a login.sql file which should be in the directory you are launching sqlplus from.
This will solve your current problem, but for further reading I suggest checking out this page: Configuring sqlplus.

Pentaho Kettle: the DELETE below doesn't seem to work in a SQL script

I've tried to execute the delete below through a SQL script step in a Pentaho job, and I get the error Unknown table 'a' in MULTI DELETE. Can somebody throw some light on this? Is there any other way to get around this?
DELETE a.* FROM pm_report.PM_CONCERTS_GQV_REPORT_TEST a
WHERE EXISTS
(SELECT 1 FROM pm_report.PM_CONCERTS_GQV_REPORT_TEST_3 b WHERE b.TM_EVENT_ID=a.TM_EVENT_ID
GROUP BY b.TM_EVENT_ID)
This is MySQL, right?
See similar solutions here - the recommendation is to remove the table alias, as shown below.
Worth noting this has nothing to do with Pentaho: if you ran it in a SQL client you'd get the same error. If you don't, then the difference is probably in the JDBC driver version - might be worth checking that.
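If dropping the alias is indeed the fix, the statement would become something like this (a sketch, untested; the GROUP BY inside the EXISTS is redundant and has been dropped):
DELETE FROM pm_report.PM_CONCERTS_GQV_REPORT_TEST
WHERE EXISTS
(SELECT 1 FROM pm_report.PM_CONCERTS_GQV_REPORT_TEST_3 b
WHERE b.TM_EVENT_ID = pm_report.PM_CONCERTS_GQV_REPORT_TEST.TM_EVENT_ID)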
I can suggest these options:
don't use aliases
try this directly on your MySQL server and check whether it works for you
don't use Pentaho like this: make a transformation and break the query apart into steps,
with a table input and a lookup step, then delete the rows by row id
It's a little bit longer but a lot more understandable and easier to maintain.
"don't over optimize"