I have a problem using Pentaho 6 CE with Amazon Redshift as my datasource to create a dashboard with a parameterized query.
When I use a parameter in the datasource, like
where city_id in (${cityid})
it returns an error, and the console log shows me this:
Not Implemented
But if I hardcode the parameter value, it works and shows the chart with the proper result. I've tried two different versions of the Redshift JDBC driver (4.0 and 4.1), but it still doesn't work.
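For reference, the two variants behave like this (the literal IDs are just hypothetical examples):

-- works: hardcoded values
where city_id in (1, 2, 3)
-- fails with "Not Implemented": parameterized
where city_id in (${cityid})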
I have developed some SQL that reads from a Redshift table, does some manipulation (especially LISTAGG on some fields), and then writes to another Redshift table.
When I run the SQL using SQLWorkbench it executes successfully. When I embed it in a Tableau Prep flow (as "Complex SQL") I get several of these errors: "System error: AqlProcessor evaluation failed: [Amazon][Support] (40550) Invalid character value for cast specification." Presumably these relate to my treatment of data types. What I don't understand is what is so different about the two environments that it would produce different results like this. Is it because SQLWorkbench and Tableau Prep use different SQL interpreters? Or is my question too broad to even speculate about without going through the actual code?
My best guess is that Tableau, which has knowledge of the DDL, is adding some CAST() operations to the SQL. SQLWorkbench is simpler and is pushing the SQL to Redshift as written. This is based on there being no explicit CASTs in your SQL, yet an error message that points at a CAST().
Look at stl_querytext for these two queries and see whether the two benches are sending them to Redshift differently. I suspect this will give you some clues to go on.
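As a sketch, assuming you can find the two query IDs in stl_query, something like this would show the exact text each client submitted (the IDs below are placeholders):

-- list recent queries to find the two runs
select query, starttime, substring(querytxt, 1, 60) as snippet
from stl_query
order by starttime desc
limit 20;

-- compare the full SQL text the two clients actually sent
select query, sequence, text
from stl_querytext
where query in (12345, 67890)
order by query, sequence;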
If there are no differences in the SQL, then the problem may lie in user or connection differences, and more information will likely be needed.
I am trying to create a database check for all versions of Oracle (11g, 12c, 19c) that inspects the credentials set on a user via dba_credentials (something like a metric extension).
But dba_credentials is only available from Oracle 12c onward.
Is there any view in 11g I can use for this purpose, with a 'case' in the SQL depending on the version?
Credentials as a separate type of object didn't exist in 11g; you set credentials as part of the dbms_scheduler package. If you want to view those credentials, you can query the dba_scheduler_credentials data dictionary view. Depending on what you're actually trying to accomplish, you may need to query that view in later versions as well, since not everyone migrating to 12c will have switched over to the new way of managing credentials.
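A minimal sketch of the version switch, done in PL/SQL so the 12c-only view name never has to parse on 11g (the count is just an example of what you might collect):

declare
  v_version varchar2(30);
  v_sql     varchar2(200);
  v_count   number;
begin
  -- pick the dictionary view based on the major version
  select version into v_version from v$instance;
  if to_number(regexp_substr(v_version, '^\d+')) >= 12 then
    v_sql := 'select count(*) from dba_credentials';
  else
    v_sql := 'select count(*) from dba_scheduler_credentials';
  end if;
  execute immediate v_sql into v_count;
  dbms_output.put_line('credentials found: ' || v_count);
end;
/

Note that a plain SQL CASE can't do this: the whole statement is parsed up front, so a reference to dba_credentials would fail on 11g regardless of which branch runs.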
I'm running a complicated SQL script in Oracle SQL Developer. The query starts with
DEFINE custom_date = "'22-JUL-2016'"
While this works fine in Oracle SQL Developer, I get an error in JetBrains:
<statement> expected got DEFINE
Also when I run the query it says:
ORA-00919: invalid function
even though it all works fine in Oracle SQL Developer.
Is there anything specific I need to configure in JetBrains PyCharm to be able to execute Oracle SQL queries correctly?
DEFINE isn't a core feature of the database; it's a command in SQL*Plus.
SQL Developer has a script engine which supports all of the SQL*Plus commands, including DEFINE, which is why it works when you run it there.
DEFINE just creates a variable and assigns a text value to it. You'll need to rewrite your code to declare the variable and assign a value to it instead.
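A minimal sketch of that rewrite, using a PL/SQL variable instead of the substitution variable (the table and column are hypothetical):

-- instead of: DEFINE custom_date = "'22-JUL-2016'"
declare
  custom_date date := to_date('22-JUL-2016', 'DD-MON-YYYY');
  v_total     number;
begin
  select count(*) into v_total
  from orders                    -- hypothetical table
  where order_date = custom_date;
  dbms_output.put_line(v_total);
end;
/

Alternatively, most JDBC-based clients (PyCharm included) let you pass the value as a bind parameter, e.g. where order_date = :custom_date, and supply the value from the client.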
Docs for DEFINE
I'm in the process of converting a Groovy script that updated a document from MongoDB and saved it back into MongoDB using the save method doc.save().
Now I want to use a SQL database via groovy.sql with the Crate database, but I cannot use the ResultSet.CONCUR_UPDATABLE trick mentioned in the 2013 question Updating SQL from object with groovy (Crate does not support it).
Is there another "Groovy" way of storing an updated row (from a single table) into an SQL database? What I'm looking for is something like row.update(), or an algorithm that constructs an UPDATE SQL command from a GroovyRowResult or Map.
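In other words, from a GroovyRowResult like [id: 1, name: 'Alice', city: 'Berlin'], the helper would generate something along these lines (table and key column are hypothetical):

-- every non-key entry becomes a SET clause; the key drives the WHERE
update my_table
set name = ?, city = ?
where id = ?;

with the map values bound as parameters in that same order.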
I'm working on some reports and we're halfway through migrating from Oracle to SQL Server.
The reports I'm migrating use some user-defined functions from the Oracle schema, so the rest of my newly translated code obviously does not work with them.
Within Report Builder 3.0 I have access to the data source, how can I provide access to the schema so the functions still work?
I'm sorry if that isn't very clear.
I would try to build a dataset pointed at the Oracle schema that calls the user-defined functions and returns their results, together with the input parameter column(s). This dataset will need to return a row for every required combination of input parameter column(s).
Then in textbox expressions, I would use the SSRS Lookup function to return the function results from the Oracle dataset.
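As a sketch, the Oracle-side dataset could be as simple as calling the function once per input value (schema, function, and column names are hypothetical):

-- one row per input value, with the UDF result alongside it
select c.customer_id,
       my_schema.calc_discount(c.customer_id) as discount
from customers c;

A textbox expression such as =Lookup(Fields!customer_id.Value, Fields!customer_id.Value, Fields!discount.Value, "OracleFunctions") would then pull the matching function result into the report.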