I have created six empty tables with SQL in pgAdmin. I have six CSV files with matching columns, and I am trying to import them in accordance with an entity relationship diagram that includes column names and key information. Five have imported relatively easily; I am still working out a different error with the last one. However, I am frequently getting this error:
internal server error: 'columns'
This error seems to occur before the request to add the CSV can even be created. When I look at the "Columns" tab in the Import/Export utility, none of the columns in the CSV I am trying to import appear. When I use
SELECT * FROM table;
I can tell that the table columns have been created with the right names. This error is confusingly inconsistent: sometimes when I drop and re-create a table using the same code as before, the error appears or disappears without any apparent cause. I have tried editing the SQL I use to create the tables, changing the order in which I import the tables, moving FKs and PKs around between tables, and reinstalling different versions of pgAdmin.
I had the same issue and resolved it by refreshing the DB connection (right-click the database > Refresh).
I think pgAdmin doesn't know that you have added the columns, so it's effectively telling you they are missing; refreshing should clear up the confusion.
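If you want to double-check that the server really does have the columns, independently of what pgAdmin shows, you can query the information schema; the table name here is just a placeholder. If the columns appear here but not in the Columns tab, the client's cached metadata is stale and refreshing should fix it.

-- Quick server-side check in Postgres ('mytable' is a placeholder name)
SELECT column_name, data_type
FROM information_schema.columns
WHERE table_name = 'mytable'
ORDER BY ordinal_position;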
I am trying to write data to an Azure SQL DB with Azure Data Factory. I'm using a Copy Data task inside a ForEach activity that loops through all the rows in an ETL table in the DB. In the pre-copy script, I have
TRUNCATE TABLE [master].[dbo].[#{item().DestinationObjectName}]
DestinationObjectName is the name of the table being loaded, as listed in the ETL table. The problem I'm having is that for some of the tables (not all; some work perfectly fine) I get the error 'Cannot find the object % because it does not exist or you do not have permissions'. The account I'm using has all the necessary privileges. I can see the script that is sent to ADF; I have copied it into the DB and confirmed it works for some tables but not others. If I run a SELECT TOP 1000 from the table in question, substituting that object for the one in the TRUNCATE TABLE script, it works. I'm really at a loss here. As I said, the truncate works for the majority of the tables but not all, and I have also double-checked that the object names are exactly the same.
Any help is appreciated.
This issue has been solved. I had to drop the affected tables, remove the brackets surrounding each name in the CREATE TABLE statements, and recreate the tables without the brackets. Very strange issue.
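For what it's worth, one way this can happen (a guess, not confirmed by the poster): if a CREATE TABLE statement wraps a name that already contains brackets, the brackets become part of the object name, and a later TRUNCATE of the plain name fails. Table names below are illustrative:

-- Creates a table literally named "[MyTable]", because the inner brackets become part of the identifier
CREATE TABLE [dbo].[[MyTable]]] (Id INT);
-- This then fails with "Cannot find the object", since no table named "MyTable" exists
TRUNCATE TABLE [dbo].[MyTable];
-- Checking the stored name shows what actually got created
SELECT name FROM sys.tables WHERE name LIKE '%MyTable%';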
I have an instance of Crate 1.0.2 and I dropped a table from it, then re-created the table with the same name and a slightly modified schema, and imported data using a COPY FROM command. The file passed to COPY FROM contains 10,000 records and the command runs OK. When I check the table tab in the Crate web console, it shows many partitions added, each holding a few records, and if I add up the number-of-records column on this tab it comes close to 10k. But when I run "select count(*) from mytable", it returns only around 8,000 records. On further investigation I found that there are certain partitions whose data cannot be queried at all. Has anyone seen this problem? Does it have anything to do with dropping the table and re-creating it with the same name? I also observed that when a table is dropped, not all files related to it are deleted from path.data. Are those leftover directories the reason some partitions become non-queryable? While importing, I also saw a "Document already exists" exception, even though I know my data has no duplicate values for the primary key column.
Some questions to clarify the issue:
Have you run REFRESH TABLE mytable after your COPY FROM command finished? (See the snippet after these questions.)
Are you sure that with the new schema of the table, there are no duplicate records?
Since 1.x versions are no longer supported, could you try CrateDB 2.1.6, the current stable version, to see if the problem persists?
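On the first point: newly imported rows in CrateDB only become visible to queries after a refresh, so a count taken straight after the import can come up short. A minimal check, assuming the table is called mytable:

REFRESH TABLE mytable;
SELECT count(*) FROM mytable;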
I am importing data from Excel into an existing table in Access and want to suppress the below message.
I have tried using a multi-field index to import new records into the table, and I have also tried importing first into a temporary table and then appending new records to the existing table.
However, under both scenarios it still shows the message pop-up below, which I want to prevent the user from seeing (as they could click Yes by accident).
If I try SetWarnings = No in a macro, it just re-imports all entries irrespective of whether they are duplicates, so that doesn't work.
I would appreciate any help
Thanks
Don't import the Excel data; link it instead.
Now you have a linked table. Use that as the source in a query where you join it with the existing table.
Select only linked records that are not already present.
Change the query to an append query (a sketch follows after these steps). This query you can run as often as you like.
When a new Excel file is received, just replace the linked file with the new file.
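As a sketch of that append query (all names are illustrative: ExcelLinked is the linked spreadsheet, MainTable is the existing table, and ID is the field the index is built on), it inserts only the linked rows that have no match in the existing table:

INSERT INTO MainTable (ID, Field1, Field2)
SELECT e.ID, e.Field1, e.Field2
FROM ExcelLinked AS e LEFT JOIN MainTable AS m ON e.ID = m.ID
WHERE m.ID IS NULL;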
This message will appear when trying to import data that violates an Index in the destination MS Access table. Check that your Excel column data does not violate the corresponding MS Access field index settings.
If the MS Access field is set to "Required" = Yes, Null values (empty cells in Excel) will also cause the message to appear.
That's two possibilities...
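If you load into a temporary table first, as the question mentions, you can also spot the offending rows before appending. A rough example, assuming the temporary table is TempImport and the indexed field is CustomerID (both names are illustrative):

-- Find key values that are duplicated or missing, which would trip the index or a Required field
SELECT CustomerID, Count(*) AS HowMany
FROM TempImport
GROUP BY CustomerID
HAVING Count(*) > 1 OR CustomerID Is Null;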
Using SSIS, how do you create a destination table in SQL Server at run time (with the right number of fields, names, data types, etc.) based solely on the structure of a source table in Oracle? The reason for asking is that my Oracle tables often change structure (fields added, data types changed, etc.), and my package breaks if there is a mismatch, say, between the number of source and destination fields. I need to automate this because I am scheduling the packages to run as jobs from SSMS. I have looked everywhere and tried everything but cannot find a suitable solution to this problem... however I'm sure it is possible... somehow.
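Not a full answer, but one building block for automating this is to read the source table's current layout from Oracle's data dictionary at run time, which a pre-execute step could use to generate a matching CREATE TABLE on the SQL Server side (schema and table names below are placeholders):

SELECT column_name, data_type, data_length, data_precision, data_scale, nullable
FROM all_tab_columns
WHERE owner = 'MYSCHEMA' AND table_name = 'MYTABLE'
ORDER BY column_id;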
I have an Excel spreadsheet with a few thousand entries in it. I want to import the table into a MySQL 4 database (that's what I'm given). I am using SQuirrel for GUI access to the database, which is being hosted remotely.
Is there a way to load the columns from the spreadsheet (which I can name according to the column names in the database table) to the database without copying the contents of a generated CSV file from that table? That is, can I run the LOAD command on a local file instructing it to load the contents into a remote database, and what are the possible performance implications of doing so?
Note: there is an auto-generated field in the table for assigning IDs to new rows, and I want to make sure that I don't override that ID, since it is the primary key of the table (there are other compound keys as well).
If you only have a few thousand entries in the spreadsheet then you shouldn't have performance problems (unless each row is very large of course).
You may have problems with some of the Excel data, e.g. currencies, best to try it and see what happens.
Re-reading your question: you will have to export the Excel data to a text file stored locally. But there shouldn't be any problem loading a local file into a remote MySQL database. I'm not sure whether you can do this with SQuirrel; you would need access to the MySQL command line to run the LOAD command.
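A rough sketch of that LOAD command, assuming the spreadsheet was saved as data.csv and the destination table is mytable with an auto-increment id column (all names are illustrative). The LOCAL keyword makes the client read the file from your machine and send it to the remote server:

LOAD DATA LOCAL INFILE 'data.csv'
INTO TABLE mytable
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(col1, col2, col3);  -- list only the non-id columns so the auto-generated id is left alone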
The best way to do this would be to use Navicat, if you have the budget to make a purchase.
I made this tool where you can paste in the contents of an Excel file and it generates the CREATE TABLE and INSERT statements, which you can then just run. (I'm assuming SQuirrel lets you run a SQL script?)
If you try it, let me know if it works for you.