BigQuery 'internal error occurred...' on valid query to create view

I have a valid SQL query in BigQuery that executes fine when run. However, the validator tells me 'an internal error occurred and the request could not be completed', and as a result I cannot save the query as a view.
Why is this? It's very frustrating, and it hasn't happened with other queries I've written. I've double-checked that all the schema references are fully qualified, etc. It's been happening for the past three days now.
Many thanks for any help.

Related

Statement object has been closed in querying from Amazon Redshift

When attempting to execute a simple query on a table (1,131,714,069 rows by 22 columns), I am running into the error:
[Amazon][JDBC](12080) Statement object has been closed.
Research online has unfortunately not provided much insight into this error.
I do not encounter this error every time I execute a query; so far its occurrence seems unpredictable. The query that most recently caused this error was a very simple SELECT ... FROM ... WHERE with no subqueries and only one condition in the WHERE clause.
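For reference, the failing statement had roughly this shape (table and column names here are hypothetical, not the actual ones):
-- a plain single-condition SELECT, no subqueries
SELECT user_id, event_type
FROM events
WHERE event_date = '2018-01-15';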
The query ran for about 22 minutes before failing; after waiting a few minutes and running it again, it completed successfully in a matter of seconds. That said, this kind of unpredictability and unreliability is exactly what I'm trying to guard against.
If it helps, the IDE that I am using to connect to my Redshift database is TeamSQL.
What could be causing this error, and what steps could I take to prevent it?

"Invalid snapshot time" error without table decorator usage

We got the error {"message":"Invalid snapshot time 1472342794519, unable to read before 1472342794785","reason":"invalid"}. Other Q&As describe this error happening when a table decorator's parameters are invalid; however, our query does not use table decorators.
The query uses TABLE_DATE_RANGE, but its arguments are date-level timestamps, so their low-order digits must be zeros, unlike the values in the error above.
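For context, the wildcard call looked roughly like this (legacy BigQuery SQL; the dataset and table prefix are hypothetical):
-- TABLE_DATE_RANGE expands to the daily tables mydataset.events_YYYYMMDD
-- between the two timestamps; TIMESTAMP('2016-08-01') is midnight UTC,
-- so its millisecond value ends in zeros.
SELECT COUNT(*)
FROM TABLE_DATE_RANGE([mydataset.events_],
                      TIMESTAMP('2016-08-01'),
                      TIMESTAMP('2016-08-27'))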
Retrying the same query succeeded.
I can provide the job ID, but because it includes internal company information, I apologize that I cannot write it here directly.
The tables that the TABLE_DATE_RANGE wildcard evaluates to are resolved as of the start of the query. Looking at the timestamps, it appears the table was deleted right after the job started executing, which caused table resolution to throw that error.

Error "Arithmetic operation resulted in an overflow."

I'm tasked with creating a program that will run an extremely long query. The query executes fine in Oracle, but whenever I try to run it from VB.NET it fails with the error mentioned in the title. I have also noticed that when I copy my query into an SqlDataSource, only certain parts are copied, not the whole query. Is there any way around this? Thank you!

Strange result when running query over a table created from a result of another query

Since yesterday, 1-09-2012, I can't run any queries over a table that has been created from the result of another query.
example query:
SELECT region FROM [project.table] LIMIT 1000
result:
Query Failed
Error: Field 'region' is incompatible with the table schema.
49077933619
These kinds of queries have passed successfully every day for the last couple of weeks. Has anybody else encountered a similar problem?
We added some additional schema checking on Friday. I was unable to reproduce the problem, but I'll look into your examples (I was able to find your failed job in the logs). In the meantime, I'm in the process of turning off the additional schema checking. Please try again and let us know if the problem continues.

SQL error 8152, but not over max?

I'm part of a team writing an ERP application using Seam and JBoss, and on one of my pages I keep getting SQL error 8152 whenever I try to input something. SQL error 8152, for those of you who don't know, is raised when you try to insert a value that exceeds the column's maximum length.
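As a minimal illustration of the error (hypothetical table, SQL Server):
-- an nvarchar(50) column, then an insert of 51 characters
CREATE TABLE #demo (name NVARCHAR(50));
INSERT INTO #demo VALUES (REPLICATE(N'x', 51));
-- fails with error 8152: String or binary data would be truncated.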
I've double-checked my entity and the database, and their length limits are the same (nvarchar(50)). In addition, I'm pretty sure that we're not using audit tables. I then put System.out.println() calls all over the place and found that the error happens between these two:
System.out.println("Flushing");
entityManager.flush(); // pushes pending entity changes to the database, executing the generated INSERT/UPDATE
System.out.println("Flushing complete");
This is part of a method that processes all changes to the table. I'm pretty new to programming, though, and not sure what's going on.
Any help would be appreciated, thanks in advance, Jeff.
P.S. Code available on request; I didn't post it because there is a lot of it spread all over the place.
I would verify the SQL that is executed when the flush() is performed. That way you can see the length of your data and confirm whether it really is too long, as the DB error indicates.
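Once you have the generated SQL, you can measure the offending value directly; a sketch (the string literal is a placeholder for the value copied from your logged INSERT):
-- LEN returns the character count; anything over 50 will not fit in nvarchar(50)
SELECT LEN(N'value copied from the logged INSERT statement') AS value_length;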
If you are using Hibernate, you can log the generated SQL to the console. You don't say what your database is, but if it's SQL Server you can use SQL Server Profiler to see what SQL is being executed.