dbt test error: dbt internally failed to run test

I wrote a simple generic dbt test, and I get an error saying dbt internally failed to execute test.dbtlearn.no_nulls_in_dim_listings_cleansed: Returned 0 rows, but expected 1 row.
I haven't found any solution for this. Does anyone have one?

It might be that you're running an earlier version of dbt. Originally, tests returned a count of failing rows (i.e. one row with a single value), but now they need to return a set of failing rows (one row per failure).
It sounds like your dbt version expects the former while your test is written for the latter. I'd try modifying your test, or better yet, upgrading to a more recent version of dbt.
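For illustration, here is the same null check written both ways (a sketch: the model name is taken from the test name in your question, and the listing_id column is a guess):

-- Old style (before roughly dbt 0.20): return one row with a failure count;
-- the test passes when the count is 0.
select count(*)
from {{ ref('dim_listings_cleansed') }}
where listing_id is null

-- New style: return the failing rows themselves;
-- the test passes when no rows come back.
select *
from {{ ref('dim_listings_cleansed') }}
where listing_id is null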

Related

SqlException: Data modification failed on system-versioned table because transaction time was earlier than period start time for affected records

I'm getting the above error when running a Web Job in a multi-threaded environment. I'm calling a stored procedure that inserts, updates, and deletes records in some fairly large temporal tables (3-4M records; not sure if that's relevant here). Each run of the job inserts or updates around 40K-80K records, depending on conditions. When a single thread runs, everything goes fine, but as soon as the number of parallel jobs is set to 2 or more, I get the error. From initial analysis, the issue seems to be with the auto-generated values for the SysStartTime and SysEndTime columns in the history table. I tried one of the solutions from the internet, subtracting one second from the date saved in those columns, as below:
DEFAULT (dateadd(second,(-1),sysutcdatetime()))
But it's not working. I have read a few articles saying that temporal tables don't work properly in multi-threaded environments. Now I'm not sure why the issue is happening or how to resolve it in a multi-threaded environment.
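For context, the tables are system-versioned along these lines (a sketch with illustrative names, not the real schema). The period columns are GENERATED ALWAYS and stamped by the engine with the transaction start time, which is presumably why overriding them with a DEFAULT has no effect:

-- Minimal system-versioned (temporal) table, assuming SQL Server
CREATE TABLE dbo.BigTable
(
    Id           int           NOT NULL PRIMARY KEY,
    Payload      nvarchar(100) NULL,
    SysStartTime datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    SysEndTime   datetime2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.BigTableHistory));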
Can someone please help me understand the reason behind the error and how to fix it?
NOTE: I can't make my code run on a single thread. A minimum of three threads is required, so converting to a single thread is not a solution in this case.

MiniProfiler has an empty parameter list

Working with the latest stable version of MiniProfiler (3.2), I am having an issue where the parameter list for the command is empty. The SQL of the command prints out fine through MiniProfiler, but the parameters themselves are missing.
The output of the SQL (as an example) is showing and I believe executing as follows:
Select person_ID, first_NME, last_NME from Customer where customer_Id = @p0
When the query executes, I get an error that states: Must declare the scalar variable "@p0"
I am able to debug and look at the DbCommand for MiniProfiler, and it does not have anything in the parameter list.
Has anyone come across this before? I have already tried setting the SqlFormatter, but I don't think that helps, because I don't have any parameters.
If you get an error like this:
Must declare the scalar variable "@p0"
That's coming from the ADO.NET driver underneath MiniProfiler (from whichever database you are connecting to; I'm assuming SQL Server here, but the same applies to all of them). MiniProfiler's parameter list shows as empty because the command's parameter collection really is empty, and that empty collection is also the source of the exception.
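You can reproduce the same failure outside MiniProfiler by running the captured SQL directly, e.g. in SSMS (a sketch; the declared type and value are purely illustrative):

-- Fails with: Must declare the scalar variable "@p0"
SELECT person_ID, first_NME, last_NME FROM Customer WHERE customer_Id = @p0;

-- Succeeds once the variable is declared (normally ADO.NET does this
-- by attaching a parameter to the command before executing it)
DECLARE @p0 int = 42;
SELECT person_ID, first_NME, last_NME FROM Customer WHERE customer_Id = @p0;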
But if you're not seeing this without MiniProfiler, and it's somehow interfering... that I'm very interested in.
Note: a lot of this has been rewritten in MiniProfiler v4, currently available as a beta on NuGet. We're giving it a test this week on Stack Overflow; if all goes well, it should see a 4.0 RTM soon after. If you find bugs with v4, please drop me an issue at https://github.com/MiniProfiler/dotnet/issues and I'll take a look ASAP.

Pentaho Execute SQL Statements variable conversion to null

I am using PDI to delete and insert some data in a DB, and I have the following issue. I create two variables called START_DATE and END_DATE that are used to select the data that will be deleted from my DB. I can read them and run my transformation with no errors in the log file, but when I checked whether the data had been deleted, I found it hadn't. I then checked my "DeleteProcedure" step, and it says "Conversion error: null". I have tried different approaches to read the variables and pass them as Strings, but I haven't been able to solve this issue. It can't be a SQL mistake, as I tested the statement with a constant and it works.
Any ideas? I attach some pics. Thanks!
As the documentation of the Execute SQL script step says:
Note: When you have an issue, that the SQL is started at the initialization phase of the transformation and not for each row, make sure to check the option "Execute for each row" (see description below).
In your case, the SQL executes during the initialization phase of the transformation, which is why it gets null values instead of the ones from the previous step.
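In other words, once "Execute for each row" is checked, the step runs per incoming row and binds the listed fields to the ? placeholders in order, along these lines (table and column names are illustrative):

DELETE FROM sales_fact
WHERE sale_date BETWEEN ? AND ?
-- In the step dialog, list START_DATE and END_DATE as the parameter
-- fields so they are bound, in order, to the two ? placeholders.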

Strange result when running query over a table created from a result of another query

Since yesterday, 1-09-2012, I can't run any queries over a table that has been created from the result of another query.
example query:
SELECT region FROM [project.table] LIMIT 1000
result:
Query Failed
Error: Field 'region' is incompatible with the table schema.
49077933619
These kinds of queries had passed successfully every day for the last couple of weeks. Has anybody else encountered a similar problem?
We added some additional schema checking on Friday. I was unable to reproduce the problem, but I'll look into your examples (I was able to find your failed job in the logs). In the meantime, I'm in the process of turning off the additional schema checking. Please try again and let us know if the problem continues.

ORA-01001: invalid cursor

I am getting an Oracle error, ORA-01001: invalid cursor, in production, where a number of transactions are processed in bulk. However, the same code works fine in development.
I need to know when one can get ORA-01001: invalid cursor in an update query. I did some googling and found two possible causes for this error:
The number of open cursors exceeds the maximum permitted (Oracle's OPEN_CURSORS parameter).
An attempt to fetch is made without opening a cursor.
Has anyone faced the problem I described above? Please suggest solutions.
Yes, these are the common causes (see also this, if you haven't already).
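The second cause is easy to demonstrate in isolation (a minimal PL/SQL sketch with illustrative names):

DECLARE
  CURSOR c_emp IS
    SELECT employee_id FROM employees;
  v_id employees.employee_id%TYPE;
BEGIN
  -- OPEN c_emp;            -- with this line missing, the FETCH below
  FETCH c_emp INTO v_id;    -- raises ORA-01001: invalid cursor
  CLOSE c_emp;
END;
/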
Considering you are using two different environments (dev/prod), have you verified that the OPEN_CURSORS parameter is the same in both (or that the prod value is higher than the dev value)?
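For example, you can compare the configured limit and the actual per-session usage with queries like these (assuming you can read the standard v$ dynamic performance views):

-- Configured limit
SELECT name, value FROM v$parameter WHERE name = 'open_cursors';

-- Cursors currently open or cached, per session
SELECT s.sid, s.username, COUNT(*) AS cursor_count
FROM v$open_cursor oc
JOIN v$session s ON s.sid = oc.sid
GROUP BY s.sid, s.username
ORDER BY cursor_count DESC;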
You should also investigate your batch process and see whether the volume of data could cause it to open more cursors in prod. Example: your batch launches a stored procedure for every department code in a departments table, and every instance of this procedure opens N cursors.
If you have, say, 3 department codes in dev because that is enough for your tests, and 34 department codes in prod, you would use roughly ten times as many cursors and could end up in exactly this situation...