I ran a query in Google BigQuery that is a basic SELECT * FROM [table] WHERE [single column] = (name).
The results came out to about 310 rows and 48 columns, but when I try to save the results in ANY format, nothing happens.
I've tried saving as a view AND a table, which I can do just fine, but trying to download the results, or trying to export them to GCP, fails every time. There is no error, no notification that something went wrong; literally nothing happens.
I'm about ready to yank out my hair and throw my computer out the window. I ran a query that was almost identical except for the (name) this morning and had no issue. Now it's after 4pm and it's not working.
All of my browsers are up to date, my logins are fine, my queries aren't reliant on tables that update during that time, I've restarted my computer four times in the hope that SOMETHING will help.
Has anyone had this issue? What else can I do to troubleshoot?
Do you have any field of RECORD (REPEATED) type in your results? I had a similar problem today, trying to save my results to Google Sheets - literally nothing happened, no error message whatsoever - but fortunately (and quite puzzlingly) - I got this error while trying to save them to CSV on Google Drive instead: "Operation cannot be performed on a nested schema. Field: ...". After removing the "offending" field, which was of RECORD (REPEATED) type, I was able to save to Google Sheets again.
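If that turns out to be the cause, one workaround is to strip or flatten the repeated field before exporting. A minimal sketch in BigQuery standard SQL - the project, dataset, table, and field names here are made up, and "events" stands in for whichever RECORD (REPEATED) field is in your schema:

-- Drop the repeated field so the result has a flat schema
SELECT * EXCEPT (events)
FROM `my_project.my_dataset.my_table`
WHERE some_column = 'some_name';

-- Or flatten it so each repeated element becomes its own row
SELECT t.* EXCEPT (events), e.event_type
FROM `my_project.my_dataset.my_table` AS t
LEFT JOIN UNNEST(t.events) AS e
WHERE t.some_column = 'some_name';

Either shape should then download or export without running into the nested-schema restriction.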
We have a split MS Access database. When users log on, they are connected/linked to two separate Access databases (one for the specific project they are working on and one for record locking and other global settings). The "locking" database is the one I need to find a solution for.
One of the tables, "tblTSS_RecordLocking", simply stores a list of user names and the RecordID of the record they are editing. It never has more than 50 records - usually closer to 5-10. But before a user can do anything on a record, the application opens "tblTSS_RecordLocking" to see if the record is in use (so this happens a lot):
Set recIOC = CurrentDb.OpenRecordset("SELECT tblTSS_RecordLocking.* FROM tblTSS_RecordLocking WHERE (((tblTSS_RecordLocking.ProjectID_Lock)=1111) AND ((tblTSS_RecordLocking.RecordID_Lock)=123456));", , dbReadOnly)
If it's in use, the user simply gets a message and the form/record remains locked. If not in use, it will close the recordset and re-open it so that the user name is updated with the Record ID:
Set recIOC = CurrentDb.OpenRecordset("SELECT tblTSS_RecordLocking.* FROM tblTSS_RecordLocking WHERE (((tblTSS_RecordLocking.UserName_Lock)='John Smith'));")
If recIOC.EOF = True Then
    'No lock row exists for this user yet, so create one
    recIOC.AddNew
    recIOC.Fields![UserName_Lock] = "John Smith"
Else
    'Re-use the user's existing lock row
    recIOC.Edit
End If
recIOC.Fields![RecordID_Lock] = 123456
recIOC.Fields![ProjectID_Lock] = 111
recIOC.Update
recIOC.Close: Set recIOC = Nothing
As soon as this finishes, everything relating to the second database is closed down (and the .laccdb file disappears).
So here's the problem. On very rare occasions, a user can get a message:
3027 - Cannot update. Database or object is read-only.
On even rarer occasions, it can flag the db as corrupt, needing to be compressed and re-indexed.
I really want to know the most reliable way to do the check/update. The process may run several hundred times a day, and while I only see the issue once every few weeks and for the most part handle it cleanly on the front end, I'm sure there is a better, more reliable way.
I agree with mamadsp that moving to SQL is the best option and am already in the process of doing this. However, whilst I was not able to create a fix for this issue, I was able to find a work-around that has yet to fail.
Instead of having a single lock table in the global database, I found that creating a lock table in the project database solved the problem. The main benefit of this is that there is much less activity on the table. So, not perfect - but it is stable.
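For anyone setting up the same workaround, here is a minimal sketch of such a project-local lock table in Access DDL. The field names and types are inferred from the code above; the composite primary key is my own addition, not part of the original design:

CREATE TABLE tblTSS_RecordLocking (
    UserName_Lock TEXT(50),
    ProjectID_Lock LONG,
    RecordID_Lock LONG,
    CONSTRAINT PK_RecordLocking PRIMARY KEY (ProjectID_Lock, RecordID_Lock)
);

Run it once per project database (for example via CurrentDb.Execute with dbFailOnError). With the composite key in place, a second attempt to claim the same ProjectID/RecordID raises a trappable key-violation error instead of silently overwriting the existing lock.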
Unable to save query results to a BigQuery table
I have been facing this issue for a couple of days now and it's annoying. I have already tried multiple query locations in the EU (due to data privacy regulations I am limited to the EU) and could never save the data. I am not getting any errors either.
So after reading 9 posts about the same error on Stack I didn't really find an answer to my problem, hence me posting this question.
So in Access I have a query that is built on several other queries. With this query a user can get information for a certain date range. The data is put in a report where the records are shown and some calculations are done internally (in the report) on the record fields.
The problem is, when a user wants data up to 2016-4-22 the report shows up fine. But when the user wants data up to 2016-4-30 (from day 23 onwards it just gives the same error), the report tries to load but after a while fails with the error:
This expression is typed incorrectly or it is too complex to be evaluated
This made me test a few things, which I have listed below:
First I thought the query itself or the given WHERE clause was corrupted / not good, so I put the report's recordsource with the WHERE clause in a separate test query and executed it. This all worked fine (no errors).
This made me believe that perhaps a single record was corrupted, so I went to the date from which the error appeared (2016-4-23) and checked that record. Nothing weird was found.
Then I thought it could be some field on the report that was giving the error, so I removed several fields, and after removing a few calculated Sum(total) fields the error disappeared.
So I created another query and did the Sum in this query as well, based on my test query - this gave the same error.
After these tests I was pretty sure that the calculated data was simply too much to calculate. I also copied the query's data into Excel and did a SUM of the field there as well. The outcome was roughly 15 million. I can hardly imagine that a number of 15 million is too complex to be evaluated...
Besides that, I also tried to create another SUM query on the specific field but with the date set to 2016-4-22 (the one that initially DID work). Now it also gives the error on that date... So there goes my theory.
Anyone has a clue what is going wrong?
The 'base' query in question is:
SELECT qryOHW6Verdicht.*
, CStr([nummer] & "." & [qryOHW6Verdicht].[positie]) AS NummerPos
, [ArbeidTot]+[MateriaalTot]+[UitbesteedTot] AS Kosten
, CDbl([AangenomenprijsPositie])+CDbl([MeerwerkPrijsPositie]) AS OpdrachtSom
, IIf([nacalculatie],[Kosten],IIf(IsNull([kostenverwacht]),(100-[otwinstpercentage])/100*[opdrachtsom],[kostenverwacht])) AS VerwachteKosten
, IIf([nacalculatie],[kosten],IIf([Kosten]<[VerwachteKosten],[Opdrachtsom]-[VerwachteKosten],[Opdrachtsom]-[Kosten])) AS VerwachteWinst
, [Kosten]-[VerkoopTot] AS WaardeOHW
, DatePart("yyyy",[gereeddatum]) AS GereedJaar
, [OtOverhead]*[VerkoopTot]/100 AS Overhead
, qryVCBoekingenVerdicht.KostenVerwacht
FROM qryOHW6Verdicht LEFT JOIN qryVCBoekingenVerdicht
ON (qryOHW6Verdicht.[Positie] = qryVCBoekingenVerdicht.[Positie])
AND (qryOHW6Verdicht.[Nummer] = qryVCBoekingenVerdicht.[Offerte])
The report opens with the previous query as recordsource and with this WHERE filter:
1=1 AND (Gereeddatum is null OR Gereeddatum>=#2016-4-30# )
AND Project IS NULL AND NOT (WaardeOHW=0 AND Kosten=0) AND GarantieOrder=0
In the report I have a few calculated fields, and one of them that goes wrong is:
=Sum([OpdrachtSom])
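One common trigger for this particular Access error is a Null reaching a conversion function such as CDbl(), or the IIf() branches returning mixed types, once the wider date range pulls in rows that the narrower range did not; Sum() then fails for the whole set. Below is a minimal sketch of how the base query's calculations could be guarded with Nz() - the field names come from the query above, and the zero defaults are an assumption about what makes sense for this data:

SELECT CDbl(Nz([AangenomenprijsPositie],0)) + CDbl(Nz([MeerwerkPrijsPositie],0)) AS OpdrachtSom
     , Nz([ArbeidTot],0) + Nz([MateriaalTot],0) + Nz([UitbesteedTot],0) AS Kosten
FROM qryOHW6Verdicht;

With the raw values cleaned up this way, =Sum([OpdrachtSom]) in the report only ever sees proper numeric values.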
I am experiencing a problem with mirrored datasets. The situation occurred because the data model was switched a few months ago and I only recently got assigned to this project, which already had a new application and data model done.
I was tasked with importing all the data from the old MS Access application to the new one, and here's where the error has its source. The old data model was written in a way that every dataset was also stored as its mirrored counterpart. Imagine a database table like this:
pk | A | B
1 | hello | world
2 | world | hello
I imported the data via a self-made staging process using Excel and VBA code, and that worked fine. The staging was necessary because I wanted to create INSERT statements and therefore had to map all the old IDs, names, ... to the new ones.
While testing the application after the import was done, I realized that the GUI showed all datasets twice. (The reason for it being shown twice, and not once plus once again in mirrored form, is the way we fill the ListBox that shows the results.)
I found the reason for that error in the mirrored data and now would like to get rid of it. The first idea I had is rather long and probably over-complicated, which is why I am posting here, in the hope of finding a shorter solution.
So, my idea is as follows and would use solely VBA coding:
1.) Fill a recordset with a SELECT * FROM mirroredDataTable
2.) For each record in the recordset from 1.), run a SQL statement and check whether the record count of its result is > 1
3.) If the result count is > 1, write one of the IDs from that result into a new recordset or array
4.) Parse the recordset / array from 3.) again and create a DELETE statement for each ID in there
5.) ???
6.) Profit
Now I already have an idea for the SQL statement in 2.), but before I begin I'd just like to make sure there is no "easy" way that I haven't considered or have simply overlooked.
Would greatly appreciate any help/info/tips you can provide.
PS: It is NOT an option to redesign the whole data model or anything along those lines (not my decision).
Thanks to @Gord Thompson I was able to solve this issue on a purely SQL basis. See the answer in this subthread for the detailed solution: How to INTERSECT in MS Access?
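For anyone looking for the general shape of such a single-statement cleanup, here is a minimal Access SQL sketch. It is not the linked solution, just an illustration using the example table above; it assumes pk is unique, and keeping the lower pk of each mirrored pair is an arbitrary choice:

DELETE t1.*
FROM mirroredDataTable AS t1
WHERE EXISTS (
    SELECT 1 FROM mirroredDataTable AS t2
    WHERE t2.A = t1.B
      AND t2.B = t1.A
      AND t2.pk < t1.pk
);

Each mirrored pair loses its higher-pk row, so the example rows 1 and 2 collapse to just row 1.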
This one has got me stumped but good. Using VBA in MS Access, I sometimes get different results when running the same code against the same table. I can run the code 2, 5, 6, 10 times and get the same results, then run it again and get a different result. I can run the code twice and get the same results and then I can run the code twice and get different results - all with the same code against the same table.
The code is used to group trips so they can be billed correctly. I do this by taking the raw SQL data and putting it into an Access table, then via several sorts and some cross-checking, I label each trip in the access table with a GR or an ML in the last field of the table. The result set is all trips for the specified time frame which are now labeled: ML (multi-loaded), GR (Grouped) or blank (demand).
I have even tried putting in MoveLast/MoveFirst to make sure the table is fully loaded each time (per suggestion from others).
Here is a link to the code and data after 2 runs of the same code on the same data:
Code&Data
I removed the trip ID and client ID data for privacy concerns. The trip ID is unique, but the client ID will be used many times depending on how many trips the client took during the time period.
Any and all help you can give to make this code produce the same results each time it is run is GREATLY appreciated. I don't want to have to go back to doing this report labeling by hand. This is the smallest of 4 that must be done twice a month.
Thanks!
David R. Mohr
When opening t_BillableTrips, I do not think it is safe to assume that the data will be sorted the way you want; that could potentially change from run to run. I would suggest using a query with an explicit sort order instead of opening the table directly. My second suggestion is to use the Recordset's Clone method to get InTable2 and InTable3; the recordsets will share the same underlying in-memory data but can be positioned at different records.
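As a sketch of what such a saved sort query could look like - the sort columns here are placeholders, since the actual grouping keys are not shown in the question:

SELECT t.*
FROM t_BillableTrips AS t
ORDER BY t.ClientID, t.PickupTime, t.TripID;

Saving a query like this and opening it as a dynaset gives every run the same, deterministic row order.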
First of all, thanks to everyone who gave an answer. Your prompts guided me to find the solution.
The 1st problem is that I was directly editing the TABLE, while being under the impression that my MAKE TABLE ... ORDER BY command was actually creating a table in the order I specified - it only worked most of the time, and we can't have that.
So, after digging deeper I found more and more evidence that trying to sort the actual table - especially with a MAKE TABLE command - is not good practice and can give unpredictable results as well as generate a lot more overhead. I am now basing my positioning and updating on a QUERY of the table and not the actual table, i.e. I changed this:
Set InTable = dbsBilling.OpenRecordset("t_BillableTrips", dbOpenTable)
Set InTable2 = dbsBilling.OpenRecordset("t_BillableTrips", dbOpenTable)
Set InTable3 = dbsBilling.OpenRecordset("t_BillableTrips", dbOpenTable)
Set InTable4 = dbsBilling.OpenRecordset("t_BillableTrips", dbOpenTable)
to this:
Set InTable = dbsBilling.OpenRecordset("q_BillableTripsSort1B", dbOpenDynaset)
Set InTable2 = dbsBilling.OpenRecordset("q_BillableTripsSort1B", dbOpenDynaset)
Set InTable3 = dbsBilling.OpenRecordset("q_BillableTripsSort1B", dbOpenDynaset)
Set InTable4 = dbsBilling.OpenRecordset("q_BillableTripsSort1B", dbOpenDynaset)
So far, this seems to have fixed the problem and, of course, the proc runs much faster since it no longer has to create the table twice for the two different sorts.