Declaring a table variable with a database-qualified type name throws the following error:
The type name 'dbname.dbo.TableType' contains more than the maximum
number of prefixes. The maximum is 1.
Declare @cutoffDtes as dbname.dbo.TableType
However, the same declaration works when I do the following:
use dbname
Declare @cutoffDtes as dbo.TableType
Is there a way to declare the variable along with database name?
The documentation is pretty clear (once you find the reference) that user-defined types are available only within a single database:
Using UDTs Across Databases
UDTs are by definition scoped to a single database. Therefore, a UDT defined in one database cannot be used in a column definition in another database. In order to use UDTs in multiple databases, you must execute the CREATE ASSEMBLY and CREATE TYPE statements in each database on identical assemblies. Assemblies are considered identical if they have the same name, strong name, culture, version, permission set, and binary contents.
In other words, you can repeat the definition in other databases, and as long as everything matches, the types are compatible.
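As a minimal sketch of that approach (the quoted documentation is about CLR UDTs, but the same "repeat the definition per database" idea applies to a user-defined table type; the column definition and the second database name here are placeholders, not the asker's actual objects):

```sql
-- Define the identical type in each database that needs it.
USE dbname;
GO
CREATE TYPE dbo.TableType AS TABLE (CutoffDate datetime NOT NULL);
GO

USE otherdb;
GO
CREATE TYPE dbo.TableType AS TABLE (CutoffDate datetime NOT NULL);
GO

-- Now the single-prefix declaration works in either database:
DECLARE @cutoffDtes AS dbo.TableType;
```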
I'm using DBeaver to write script for my PostgreSQL database.
I have a PostgreSQL DB with tables autogenerated by C#/EF Core (Microsoft's ORM). I receive SQL Error [42P01] if I don't add double quotes around table names when I cut and paste my ORM queries into DBeaver, and [42703] for fields without double quotes. I don't have to add double quotes in C# code, but it appears to be required in DBeaver?
example:
select * from Dnp3PropertyBase => SQL Error [42P01]
select * from "Dnp3PropertyBase" => OK, all results shown...
Does anybody know if I can change a parameter in DBeaver somewhere so that I can enter table and field names without double quotes?
Note: Using DBeaver 22.3.2 (latest on 2023-01-11)
Update: after reading Postgresql tables exists, but getting "relation does not exist" when querying, I checked:
show search_path => public, public, "$user"
SELECT * FROM information_schema.tables => All tables are in public schema
SELECT * FROM information_schema.columns => All columns are in public schema
Question: How to be able to cut and paste my EFCore generated queries from Visual Studio output window to DBeaver query without having any errors regarding table names and field names?
First, let me copy a_horse_with_no_name's comment:
Unquoted names are folded to lower case in Postgres (and to upper case in Oracle, DB2, Firebird, and many others). So SomeTable is in fact stored as sometable (or SOMETABLE). However, quoted identifiers have to preserve the case and are case sensitive then. So "SomeTable" is stored as SomeTable.
Many people recommended snake case to me, which I initially didn't want to use because all the tables were auto-generated by EF Core (Microsoft's C# ORM). I told myself that Microsoft would do the standard thing. By default, Microsoft uses the exact "class" name in code as the table name, which appears very logical to me: stay coherent and apply the same rules everywhere. C# convention is Pascal case for class names, so by default each table name ends up in Pascal case instead of snake case.
PostgreSQL seems to push users toward snake case because it lower-cases every name that is not double-quoted. According to a_horse_with_no_name, and I think the same, PostgreSQL is the only database that folds unquoted table and field names to lower case in SQL scripts. That behavior (changing the case of unquoted names) seems very limiting to me, and it has hidden effects that can be hard to track down for people coming from other database worlds.
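The folding behavior is easy to demonstrate with a throwaway table (a made-up example, not one of my EF Core tables):

```sql
CREATE TABLE SomeTable (Id int);    -- unquoted: actually created as sometable
SELECT * FROM sometable;            -- OK
SELECT * FROM SOMETABLE;            -- OK, folded to sometable
SELECT * FROM "SomeTable";          -- ERROR: relation "SomeTable" does not exist

CREATE TABLE "OtherTable" (Id int); -- quoted: case is preserved
SELECT * FROM "OtherTable";         -- OK
SELECT * FROM OtherTable;           -- ERROR: relation "othertable" does not exist
```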
According to the PostgreSQL docs, the recommendation is to use a NuGet package (.UseSnakeCaseNamingConvention()). It probably works fine for TPH (table per hierarchy), which Microsoft recommends for performance, but it does not work for table names with TPC (table per concrete type) because of actual bugs in EF Core 7 (see the GitHub project).
I received that message at the end of "update-database":
Both 'WindTurbine' and 'ResourceGenerator' are mapped to the table
'resource_generator'. All the entity types in a non-TPH hierarchy (one
that doesn't have a discriminator) must be mapped to different tables.
See https://go.microsoft.com/fwlink/?linkid=2130430 for more
information.
PostgreSQL doc: TPH is supported OK, but table naming is not for TPC (2023-01-12). I use TPC, so I had to force each table name directly through TableAttribute.
My solution: for table names, I use snake case by manually adding a Table attribute to each of my classes with the proper name, like this sample:
[Table("water_turbine")]
public class WaterTurbine : ResourceGenerator
For fields, I use the EFCore.NamingConventions NuGet package, which works fine for field names. Don't forget: if you have 2 classes mapped to the same table, it is because you are using TPC and did not force the table name through TableAttribute.
This way all my table and field names are snake case, and I can cut and paste any query dumped in my debugger directly into any SQL script window of DBeaver (or any SQL tool).
Is there any advantage to using a standalone table type vs. a table type created inside a package spec or body, in terms of efficiency, apart from the differences below?
A standalone table type can be used in multiple places, while a package table type can only be used inside the package. Some may argue that we can create a common package spec and use the table type from that spec.
A standalone table type is an additional DB object to maintain.
If you mean something like this, for example:
type some_type is table of number;
then storing it in a package can help you keep it together with other logically related objects. Granting access to those objects is also easier, because you don't need to grant access to each object, just to the package.
And keeping it private to a package, as you mentioned in 1., can also be a reason to decide on using a package.
Some operations require the type to be defined as a global (schema-level) type, e.g.
select ...
into ....
from TABLE(<your table type variable>);
does not work with a locally defined type (at least this was the case in earlier Oracle versions).
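A sketch of that TABLE() usage with a schema-level (standalone) type; the type name and values here are made up for illustration:

```sql
CREATE OR REPLACE TYPE number_tab AS TABLE OF NUMBER;
/

DECLARE
  v_nums number_tab := number_tab(1, 2, 3);
  v_sum  NUMBER;
BEGIN
  -- TABLE() over a collection variable: needs a schema-level type
  -- (older Oracle versions reject a type declared inside a package here)
  SELECT SUM(column_value)
    INTO v_sum
    FROM TABLE(v_nums);
  DBMS_OUTPUT.PUT_LINE(v_sum);  -- prints 6
END;
/
```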
I want to be able to send emails using SSIS. I followed the instructions at "How to send the records from a table in an e-mail body using SSIS package?". However, I am getting an error:
Error: ForEach Variable Mapping number 1 to variable "User::XY" cannot be applied
while running the package. My source table has 5 columns (bigint, datetime, nvarchar, nvarchar, nvarchar types).
Another error is:
Error: The type of the value being assigned to variable "User::XY" differs from the current variable type. Variables may not change type during execution. Variable types are strict, except for variables of type Object.
What could the problem be?
UPDATE: While trying to track down the problem, I did this: when taking the data from the Execute SQL Task, I cast the int data to varchar and then use a variable with the String data type, and it works. But how should I set up the variable so it has an Int data type, not varchar?
I just ran into this problem & resolved it, although I don't know exactly how.
Running SSIS for SQL Server 2008 R2:
a) query pulls rows into an object
b) for each trying to loop through and pull values for the first two columns into variables
(this had already been running fine--I had come back to edit the query and the for each loop and add an additional variable for branching logic)
c) error mapping variable '1', which happened to be an int and happened to have the same name as the column I was pulling from.
I tried deleting the variable-to-column reference in the foreach loop and re-adding it, and I discovered that the variable was not listed anymore in the list of variables allowed for mapping.
I deleted the variables, created a new variable of the same type (int32) and name, added it, and things ran fine.
A task assigned to us by our professor states that we need to do the following:
Submit an SQL script file containing your SQL statements for the following questions:
CREATE A COLLECTION
CREATE ALL THE TABLES FROM ASSIGNMENT 1 SOLUTION UPLOADED TO BLACKBOARD
ADD THE PRIMARY KEYS AND FOREIGN KEYS TO THEM
INSERT (MINIMUM OF 3 RECORDS) TO EACH TABLE
UPDATE AND DELETE (MINIMUM OF 1 RECORD) FROM EACH TABLE
However, in none of the lectures has he used the term "collection"; I've always heard "library" and some other terms. What is a collection?
I am using Notepad++ with the language set to SQL, and I typed in CREATE COLLECTION; CREATE highlights in blue, but COLLECTION does not have a colour assigned to it (nor does LIBRARY).
When I tried googling for an answer, I got this from IBM
"An SQL collection is the basic object in which tables, views, indexes, and packages are placed"
So a collection would just be a library, wouldn't it?
So if that's the case, then on the iSeries (AS/400) command line I would type
CREATE COLLECTION ASSIGN1
but in a script, would that be the same thing?
Thanks for your time.
EDIT
My professor sent me this as an example: a .sql file that opens in an iSeries program called "Run SQL Scripts". However, he didn't explain anything, he just sent it... so is it safe to assume a Collection is the same as creating a Library?
CREATE COLLECTION FARA042;
CREATE TABLE FARA043.EMPLOYEE (
EMP_NUM VARCHAR(10) CONSTRAINT FARA043.EMPLOYEE_PK PRIMARY KEY,
EMP_FNAME VARCHAR(50),
EMP_LNAME VARCHAR(50));
SELECT * FROM FARA043.SYSTABLES;
SELECT * FROM FARA043.SYSCOLUMNS
WHERE TABLE_NAME = 'CHARTER';
You are correct. On IBM i (formerly known as iSeries, System i) the terms Library, COLLECTION, and SCHEMA all refer to the same thing. IBM now uses the term SCHEMA instead of the term COLLECTION, to conform to newer SQL standards, but they are synonymous. However, the term COLLECTION has been deprecated, and therefore should no longer be used.
There are, however, some subtle differences between CRTLIB and CREATE SCHEMA (or CREATE COLLECTION).
The CL command CRTLIB allows you to specify the description of the library, just as any IBM i object has an object description. You can also specify whether the library is to be treated as a *PROD or *TEST library when someone is debugging. On IBM i, when a developer starts debugging, one of the settings is a safety feature indicating whether the session will be allowed to update files (tables) in a *PROD library or not.
The SQL CREATE SCHEMA statement, on the other hand, not only creates a library, but sets it up with catalog views and automatic database journalling (logging).
Once you have created a schema in SQL, you can return to CL and use the CHGLIB command to set the library type and description, thus having the benefits of both methods.
One other difference: the SQL CREATE SCHEMA statement will allow you to create schemas with names longer than the IBM i 10-character standard. If you do this, I strongly suggest that you also give it a valid 10-character OS object name by using the FOR SYSTEM NAME clause; otherwise the OS will be forced to generate a 10-character library name.
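A sketch of combining the two (all names here are invented for illustration):

```sql
-- SQL: create the schema with a long name plus an explicit
-- 10-character system name, instead of a generated one
CREATE SCHEMA ACCOUNTS_RECEIVABLE FOR SYSTEM NAME ACCRECV;

-- CL: then set the library type and description
--   CHGLIB LIB(ACCRECV) TYPE(*PROD) TEXT('Accounts receivable data')
```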
Part of a reporting toolkit we use for our development is configured to always use the same schema (say XYZZY).
However, certain customers have stored their data in a different schema PLUGH. Is there any way within DB2/z to alias the entire schema XYZZY to refer to the objects in schema PLUGH?
The reporting toolkit runs on top of ODBC using the DB2 Connect Enterprise Edition or Personal Edition 9.1 drivers.
I know I can set up individual aliases for tables and views but we have many hundreds of these database objects and it will be a serious pain to do the lot. It would be far easier to simply have DB2 auto-magically translate the whole schema.
Keep in mind we're not looking for being able to run with multiple schemas, we just want a way to redirect all requests for database objects to a single, differently named, schema.
Of course, if there's a way to get multiple schemas on a per-connection basis, that would be good as well. But I'm not hopeful.
I am guessing that by DB2 schema you mean the qualifying name in a two-part object name. For example, if a two-part table name is PLUGH.SOME_TABLE_NAME, you want to define XYZZY as an alias name for PLUGH so the reporting program can refer to the table as XYZZY.SOME_TABLE_NAME.
I don't know how to do that directly (schema names don't take on aliases, as far as I am aware).
Your objection to defining individual alias names using something like:
CREATE ALIAS XYZZY.SOME_TABLE_NAME FOR PLUGH.SOME_TABLE_NAME
is that there are hundreds of them to do, making it a real pain. Have you thought about using a SELECT against the DB2 catalog to generate a CREATE ALIAS statement for each of the objects you need to refer to? Something like:
SELECT 'CREATE ALIAS XYZZY.' || NAME || ' FOR PLUGH.' || NAME
FROM SYSIBM.SYSTABLES
WHERE CREATOR = 'PLUGH'
Capture the output into a file, then execute it. It might be hundreds of commands, but at least you didn't have to write them.
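Since the question mentions views as well as tables, the same idea can cover both; on DB2 for z/OS the SYSIBM.SYSTABLES catalog table lists both, distinguished by the TYPE column (a sketch; exact catalog column values can vary by DB2 version and platform):

```sql
-- Generate aliases for both tables ('T') and views ('V')
SELECT 'CREATE ALIAS XYZZY.' || NAME || ' FOR PLUGH.' || NAME
  FROM SYSIBM.SYSTABLES
 WHERE CREATOR = 'PLUGH'
   AND TYPE IN ('T', 'V');
```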