After loading data into Google BigQuery with batch processing, I'm expecting tables that are partitioned by date (I know, I have to rename the date column because the name is reserved).
However, some of the new tables have a different icon and show a number after the table name in this format: (1). See the image below.
What is happening here? I would expect them to be normal partitioned tables.
I can't find this anywhere in their documentation or the wider web.
// If you give a -1, at least tell me why. What is wrong with this question?
That number is shown because you probably have (or used to have) several tables with the same schema and the same name prefix; BigQuery groups them into a single entry and shows the count, even if some of those tables have since been deleted. For example, here I have 6 tables with the same prefix and it shows (6).
When I open this entry, it appears that I can select any of the other tables to show its schema or details:
I have a table in SQL Server that contains document-related data. Data entry in this table is handled from ASP.NET code:
I have to show this table's data in Power BI Desktop. With a normal SELECT query I can bind the data to a table in Power BI Desktop, but along with this table's data I have to show one more column, IsProcessedBefore, which I have to compute for each row.
This column's value can be '1' or '0':
1 indicates that this particular Document Number has been processed earlier;
0 indicates that this particular Document Number has not been processed earlier.
Output should be shown in Power BI desktop report as shown below.
I cannot figure out how to compute this indicator for the already existing data.
Whenever a new entry is made in the SQL Server table, it must be reflected in the Power BI report along with the computed indicator value.
Please help me with this scenario.
One way I found is to create another page in the same Power BI report showing the number of times each document was processed, using a GROUP BY query on DocumentNumber and DocumentId (output shown below). But this is just a workaround: with this approach we end up with two pages. One could check a DocumentNumber from the actual table data against the output below and refer to its count; if the count is greater than 1, the document was processed earlier.
But this workaround does not match the exact requirement.
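One way to avoid the two-page workaround is to compute the flag in the query that feeds the report, so each row arrives with IsProcessedBefore already set. Here is a minimal sketch using Python's sqlite3 (the table name, column names, and sample data are assumptions for illustration); the correlated EXISTS in the SELECT works the same way in SQL Server, where it could be a view Power BI connects to:

```python
import sqlite3

# Hypothetical document table; names and sample rows are made up for the demo.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Documents (DocumentId INTEGER PRIMARY KEY, DocumentNumber TEXT)")
conn.executemany("INSERT INTO Documents (DocumentId, DocumentNumber) VALUES (?, ?)",
                 [(1, "D-100"), (2, "D-200"), (3, "D-100"), (4, "D-300"), (5, "D-100")])

# IsProcessedBefore = 1 when an earlier row (lower DocumentId) has the same
# DocumentNumber, else 0 -- computed per row by a correlated EXISTS.
rows = conn.execute("""
    SELECT d.DocumentId,
           d.DocumentNumber,
           CASE WHEN EXISTS (SELECT 1 FROM Documents p
                             WHERE p.DocumentNumber = d.DocumentNumber
                               AND p.DocumentId < d.DocumentId)
                THEN 1 ELSE 0 END AS IsProcessedBefore
    FROM Documents d
    ORDER BY d.DocumentId
""").fetchall()
print(rows)
```

Because the flag is computed at query time, any new row entered from the ASP.NET side picks up the correct value on the next report refresh, with no second page needed.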
I have a database, which contains information that I can't share images of due to compliance reasons.
I have a table I need to copy data from, so I was using the following SQL:
INSERT INTO completedtrainingstestfinal (MALicenseNum)
SELECT MALicenseNum
FROM CompletedTrainings
WHERE (CompletedTrainings.MALicenseNum IS NOT NULL)
AND (CompletedTrainings.Employee = completedtrainingstestfinal.Employee);
It keeps popping up the Enter Parameter Value dialog, centered on the new table (named completedtrainingstestfinal) at the Employee column.
Background: the original table is a mess, and this is to be the replacement table. I've had to pivot the table in order to clean it up, and I'm now trying to remove an ungodly number of nulls. The goal is to streamline the query process for the end users, who need to enter training and certification/recertification through the forms.
When you look at the old table, it was designed to reference another table and display the actual names, but as seen in the image below, it stores the data as the integer Employee number.
The new table's Employee column was a direct copy but only displays the integer. My instinct tells me the problem is here, but I have been unable to find a solution. Does anyone have any suggestions to point me in the right direction?
Edited to add: might it be an issue that the tables have different numbers of rows?
This is the design view of the two relevant tables :
Table 1
Table 2
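The Enter Parameter Value prompt is the usual symptom of the query referencing a name Access cannot resolve: the WHERE clause mentions completedtrainingstestfinal.Employee, but that table is not in the SELECT's FROM clause, so Access treats the unknown name as a parameter. Since the target rows already exist, the intent is better expressed as an UPDATE joined on Employee. A minimal sqlite3 sketch of that logic (table and column names taken from the question, sample data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE CompletedTrainings (Employee INTEGER, MALicenseNum TEXT);
    CREATE TABLE completedtrainingstestfinal (Employee INTEGER, MALicenseNum TEXT);
    INSERT INTO CompletedTrainings VALUES (1, 'MA-111'), (2, NULL), (3, 'MA-333');
    INSERT INTO completedtrainingstestfinal (Employee) VALUES (1), (3);
""")

# Copy MALicenseNum onto the existing rows, matched by Employee. The
# correlated subquery plays the role of the join on Employee.
conn.execute("""
    UPDATE completedtrainingstestfinal
    SET MALicenseNum = (SELECT c.MALicenseNum
                        FROM CompletedTrainings c
                        WHERE c.Employee = completedtrainingstestfinal.Employee
                          AND c.MALicenseNum IS NOT NULL)
""")
rows = conn.execute("""
    SELECT Employee, MALicenseNum
    FROM completedtrainingstestfinal
    ORDER BY Employee
""").fetchall()
print(rows)
```

In Access itself the same idea is usually written with its join-style UPDATE (UPDATE table1 INNER JOIN table2 ON ... SET ...); the key point is that every table named in the query must appear in its FROM/JOIN clause, or Access will prompt for it as a parameter.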
I need some help, and I know I am not the only one dealing with this issue, but I am wondering if you might have some ideas on how to handle comparing two rows of data and filling out start and end dates.
To give you some context: we have a huge hierarchy (approx. 8,000 rows and about 12 columns wide) that is updated each year. Sometimes the values change and sometimes they don't. When the values don't change, I don't need to adjust the dates. When the values do change and a new row is added, I need to update the dates.
I have attached some fake data to try and illustrate my data. I am building this in MS Access, so I think this is more of a DBA type question that is going to be manipulated via a recordset type method.
In my example I have two tables – Old Table and New Table. In each table there is a routing code field that represents my join field and primary key for this table.
The Old table represents existing data - tblMain. The New Table represents the data to be appended - tblTemp.
To append the data, I have an append query set up in Access. I perform a left join between the Old and New tables, joining on every field and append the rows that are null in the Old table. That’s fine and that is not where my issue is.
What is causing me issue is how to fill out the start and end dates.
So as you can see from my tables, we are running a zoo. Let’s just say for the sake of the argument, our zoo started off pretty simple and has become more sophisticated. We now want our hierarchy to expand out and become a bit more detailed as we are now capturing the type of animal (Level 4) and the native location (Level 5).
As you can see when comparing one table to the other, the routing codes are the same, so the append query has to join on each field. When you do this, you get the Result Table, which is essentially the Old and New tables stacked on top of each other. You might think of a union query, but that would give me duplicates, and I don't want that.
If you notice, the Result Table has a Start and an End Date. Let's just say I get the start and end dates via a message box that pops up upon import of the data and hold them in a variable. I think there are dates in my real data, but I'm still trying to verify this.
So how do I compare them (pseudocode for the logic needed)?
• For each routing code:
    Compare Levels 1-5.
    If the routing code is the same but Levels 1-5 are not the same:
        Fill out the end date of the old record.
        Fill out the start date of the new record.
This idea of comparing two records and filling out a date is quite prevalent in my organization, but I haven't found a way of creating logic that consistently works, so any help or suggestions would be appreciated.
Old Table
New Table
Result Table
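The pseudocode above is essentially slowly-changing-dimension maintenance, and it can be done with two set-based queries instead of a recordset loop: one UPDATE to end-date the current rows whose levels changed, and one append of the changed rows with the new start date. A minimal sqlite3 sketch of that logic (only two level columns instead of five, and the table names tblMain/tblTemp and sample zoo rows are taken or adapted from the question; the dates stand in for the values captured from the import prompt):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tblMain (RoutingCode TEXT, Level1 TEXT, Level2 TEXT,
                          StartDate TEXT, EndDate TEXT);
    CREATE TABLE tblTemp (RoutingCode TEXT, Level1 TEXT, Level2 TEXT);
    INSERT INTO tblMain VALUES
        ('Z100', 'Zoo', 'Mammals', '2020-01-01', NULL),
        ('Z200', 'Zoo', 'Birds',   '2020-01-01', NULL);
    INSERT INTO tblTemp VALUES
        ('Z100', 'Zoo', 'Mammals'),          -- unchanged: leave dates alone
        ('Z200', 'Zoo', 'Birds of Prey');    -- changed: close old, open new
""")

end_date, start_date = "2023-12-31", "2024-01-01"  # e.g. from the import prompt

# Step 1: close out current rows whose levels changed in the new load.
conn.execute("""
    UPDATE tblMain
    SET EndDate = ?
    WHERE EndDate IS NULL
      AND EXISTS (SELECT 1 FROM tblTemp t
                  WHERE t.RoutingCode = tblMain.RoutingCode
                    AND (t.Level1 <> tblMain.Level1 OR t.Level2 <> tblMain.Level2))
""", (end_date,))

# Step 2: append the changed rows as new current records.
conn.execute("""
    INSERT INTO tblMain (RoutingCode, Level1, Level2, StartDate, EndDate)
    SELECT t.RoutingCode, t.Level1, t.Level2, ?, NULL
    FROM tblTemp t
    WHERE NOT EXISTS (SELECT 1 FROM tblMain m
                      WHERE m.RoutingCode = t.RoutingCode
                        AND m.Level1 = t.Level1 AND m.Level2 = t.Level2)
""", (start_date,))

rows = conn.execute("""
    SELECT RoutingCode, Level2, StartDate, EndDate
    FROM tblMain
    ORDER BY RoutingCode, StartDate
""").fetchall()
print(rows)
```

Both statements translate directly into an Access update query and an append query, so the only VBA needed is whatever already captures the dates; no row-by-row recordset comparison is required.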
Here's my situation: we have an application that uses Sybase to store its data on the back end.
Please refer to my screenshots below for better understanding of what I'm talking about.
In our application, we have some custom "tabs" (columns in the db) which are supposed to contain data in the db tables. When opening and viewing them in the application, these tabs contain data (so it must reside somewhere...), but when you query the table it supposedly resides in, there's no data to be found. These columns should not contain all null values!
In this example, "trainer" is one of the columns which should contain data. I do a [sp_columns #column_name = 'trainer'] and see that it supposedly resides under table "user_tab_data" (screenshot 1).
Expanding user_tab_data in our sql browser, we see the data points we need to query (training date, training course, trainer, etc.), however when querying for the values, nothing comes up! We can see when opening the application that there is indeed data stored somewhere though.
Is there anything I can do to locate this? Am I missing something here? Any help is appreciated.
Thank you
sp_columns to find table name
query showing that table contains null values
Well, this is embarrassing. I figured it out. There were just so many entries in the table that you could scroll for hours and not see any rows where the columns I wanted had data.
I had to add IS NOT NULL to pull only the rows I wanted to see.
Duh.
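For anyone hitting the same wall: when a column is populated in only a handful of rows, a filter surfaces them immediately instead of scrolling. A tiny sqlite3 sketch of the idea (the table and column names user_tab_data/trainer come from the question; the data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_tab_data (trainer TEXT)")
# Mostly-empty column: a few populated rows buried among thousands of NULLs.
conn.executemany("INSERT INTO user_tab_data VALUES (?)",
                 [(None,)] * 5000 + [("J. Smith",)] + [(None,)] * 5000 + [("A. Jones",)])

# The IS NOT NULL filter pulls only the rows that actually hold data.
rows = conn.execute(
    "SELECT trainer FROM user_tab_data WHERE trainer IS NOT NULL"
).fetchall()
print(rows)
```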
I am having an issue with my Microsoft Access database. One of my tables looks completely blank, but it shows 11632 records at the bottom. Take a look at this screenshot. Though the table appears blank, when I run the query it pulls the correct data from this table, so I know the data is there; it is just not displaying for some reason. I have tried Access 2013 and 2016 on a different computer, and both have the same effect. I have also tried compacting and repairing, and exporting the table, but the file it exports to also appears blank aside from the field names. Any ideas on what I could try?
Thanks!
Turn your import into a two-step process (or more...). Write the data raw into a scratch-pad table, then fire an append query with the appropriate criteria so that only valid records go into the final table.
This isn't unusual at all, given that some outside data sources may not have the controls to prevent bad records. Sometimes one must "wash" the data with several different query criteria/steps in order to filter out the bad rows.
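The two-step load described above can be sketched in a few lines; this sqlite3 demo uses made-up table names (ImportScratch, FinalTable) and columns, and a simple "no NULLs" rule standing in for whatever criteria define a valid record:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ImportScratch (MALicenseNum TEXT, Employee INTEGER);
    CREATE TABLE FinalTable    (MALicenseNum TEXT, Employee INTEGER);
    -- Step 1: raw load, no validation -- bad rows land here too.
    INSERT INTO ImportScratch VALUES
        ('MA-111', 1), (NULL, 2), ('MA-333', NULL), ('MA-444', 4);
""")

# Step 2: the append query -- only rows passing the criteria reach the
# final table; everything else stays behind in the scratch-pad table.
conn.execute("""
    INSERT INTO FinalTable (MALicenseNum, Employee)
    SELECT MALicenseNum, Employee
    FROM ImportScratch
    WHERE MALicenseNum IS NOT NULL
      AND Employee IS NOT NULL
""")
rows = conn.execute("SELECT * FROM FinalTable ORDER BY Employee").fetchall()
print(rows)
```

A side benefit of keeping the scratch-pad table around is that the rejected rows are easy to inspect afterwards (select everything that did not make it into the final table), which helps tighten the wash criteria over time.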