How to find the sequence used for AUTO_INCREMENT?

Before you answer, let me emphasize that this question relates to the Ingres RDBMS.
Like many other Ingres users who have complained about this on forums in the past, I experience the access issue when AUTO_INCREMENT is used... I need to find out the sequence used for the AUTO_INCREMENT field, so I can grant access to it in order to prevent some annoying exceptions...
Yes, when an exception is thrown (JDBC) I get the name of the sequence in question, and I can fix it. But when I have a bunch of tables, I may want to fix them all with a script.
How do I find which sequence is used (I mean its name)?
Similarly, how do I find out in which table a certain identity sequence is used?
Example: $iiidentity_sequence_0012936

Try this:
SELECT table_name, column_name, column_default_val
FROM iicolumns
WHERE column_always_ident = 'Y'
   OR column_bydefault_ident = 'Y'
ORDER BY 1, 2
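If you need the sequence names themselves (e.g. to script the grants), they should also be visible in the sequence catalog. A sketch, assuming the iisequences catalog, the $iiidentity_sequence_* naming pattern shown in the question, and an Ingres version that supports GRANT NEXT ON SEQUENCE:

```sql
-- List the system-generated identity sequences; they follow
-- the $iiidentity_sequence_* naming pattern.
SELECT seq_owner, seq_name
FROM iisequences
WHERE seq_name LIKE '$iiidentity%';

-- Then grant access per sequence, e.g. (quoted because the
-- name starts with a $):
GRANT NEXT ON SEQUENCE "$iiidentity_sequence_0012936" TO some_user;
```

To map a sequence back to its table, note that column_default_val in iicolumns holds the "next value for ..." expression that names the sequence, so you can match on it with LIKE.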

Why does the id serial primary key keep changing?

I have just started a full-stack web developer course which includes PostgreSQL, and I have been given some practice questions to do. When I clicked Run SQL it displayed the id, first_name and last_name, but when I entered more lines of code to answer more questions and clicked Run SQL again, the id number changed to a completely different number, and I don't understand why this is happening.
In the practice questions I was instructed to add more rows and then to update the entry with an id of 2 to something else, but how can I update id 2 if the id numbers keep changing? id 2 wasn't even on the screen. My understanding of id serial primary key is that it auto-increments the id when you add new rows, but in this case the id keeps changing to random numbers. Why does it do this? The screenshots are code from the course, not what I entered. http://sqlfiddle.com/#!17/a114f/2 is the link, but I am not sure whether anyone who has not signed up to the course can access it. Sorry if this is a really simple newbie question, but I have spent a lot of time looking online and I really need to move forward.
As far as I can tell this is a bug in SQLFiddle.
Apparently the table definition (or something else) is shared with other users. If you do the same using e.g. db<>fiddle, you always get the same ID after dropping and re-creating the tables:
db<>fiddle demo
SQLFiddle has never worked reliably for me anyway. Plus it seems to be stuck on a really old Postgres version. So you might use something different to practice your SQL skills or do your homework.
Like a_horse_with_no_name, I too prefer db<>fiddle for SQL code sharing. But if you're restricted to sqlfiddle for whatever reason, you can add a setval() command to your code to force the seeding value.
SELECT setval('drivers_id_seq', 1);
INSERT INTO drivers (first_name, last_name) VALUES ('Amy', 'Hua');
SELECT * FROM drivers;
See example here (link). Note that drivers_id_seq is a system-generated name that you can guess pretty easily (should you need to reseed a serial you created on another object).
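Rather than guessing the system-generated name, you can ask Postgres for it. A small sketch using the standard pg_get_serial_sequence() function, with the table and column names from the question:

```sql
-- Ask Postgres for the sequence backing drivers.id instead of
-- relying on the <table>_<column>_seq naming convention:
SELECT pg_get_serial_sequence('drivers', 'id');

-- Reseed it; with is_called = false, the next nextval() returns 1:
SELECT setval(pg_get_serial_sequence('drivers', 'id'), 1, false);
```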
In SQL Fiddle, every time you click the button on the right it's going to "rerun" your code.
However, SQL Fiddle doesn't guarantee the primary keys are going to be the same, and it isolates your code in a way that is evidently causing the pk to increment.
http://sqlfiddle.com/#!17/a114f/2 Here's the original fiddle, and if you just jam on that submit button you can see the value changing each time.
Nothing in your code prevents duplication from occurring if you submitted it multiple times, but then you would always have more than one row in the drivers table.

'-999' used for all condition

I have a sample of a stored procedure like this (from my previous working experience):
Select * from table where (id=@id or id='-999')
Based on my understanding of this query, the '-999' is used to avoid an exception when no value is transferred from the user. So far in my research, I have not found this usage on the internet or in other companies' implementations.
@id is transferred from the user.
Any help in providing some links related to it will be appreciated.
I'd like to add my two guesses on this, though please note that, to my disadvantage, I'm one of the very youngest in the field, so this is not coming from much history or experience.
Also, please note that whatever reason anybody provides, you might not be able to confirm it 100%. Your oven might just not have any leftover evidence in and of itself.
Now, per another question I read before, extreme integers were used in some systems to denote missing values, since text and NULL weren't options in those systems. Say I'm looking for ID #84 and I cannot find it in the table:
Not Found Is Unlikely:
Perhaps in some systems it's far more likely that a record exists with a missing/incorrect ID than that it doesn't exist at all? Hence, when no match is found, designers preferred all records without valid IDs to be returned?
This however has a few problems. First, depending on the design, the user might not recognize that the results are a set of records with missing IDs, especially if only one was returned. Second, the current query poses a problem, as it will always return the missing-ID records in addition to the normal matches. Perhaps they relied on ORDERing to ease readability?
Exception Above SQL:
AFAIK, SQL is fine with a zero-row result, but maybe whatever thing that calls/used to call it wasn't as robust, and something went wrong (hard exception, soft UI bug, etc.) when zero rows were returned? Perhaps, then, this ID represented a dummy row (e.g. blanks and zeroes) to keep things running.
Then again, this also suffers from the same arguments above regarding "record is always outputted" and ORDER, with the added possibility that the SQL-caller might have dedicated logic to when the -999 record is the only record returned, which I doubt was the most practical approach even in whatever era this was done at.
... the more I type, the more I think this is the oven, and only the great grandmother can explain this to us.
If you want to avoid an exception when no value is transferred from the user, declare the parameter with a null default in your stored procedure, like @id int = null.
for instance :
CREATE PROCEDURE [dbo].[TableCheck]
    @id int = null
AS
BEGIN
    SELECT * FROM [table] WHERE (id = @id)
END
Now you can execute it either way:
exec [dbo].[TableCheck] 2 or exec [dbo].[TableCheck]
Remember, it's a separate thing if you want to return the whole table when your input parameter is null.
As for your id = '-999' condition: I tried it your way, and it doesn't prevent any exception.
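If the intent behind '-999' really was "return everything when no value is passed", the usual NULL-default pattern covers both cases without a magic value. A sketch, reusing the hypothetical [dbo].[TableCheck] procedure and [table] from above:

```sql
CREATE PROCEDURE [dbo].[TableCheck]
    @id int = null
AS
BEGIN
    -- When @id is null (no value passed), the first branch of the OR
    -- is true for every row, so the whole table comes back; otherwise
    -- only the matching row(s) are returned.
    SELECT * FROM [table]
    WHERE (@id IS NULL OR id = @id)
END
```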

Behavior of Selecting Non-Existent Columns [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
Given a table with this structure:
create table person (
    id int identity primary key,
    name varchar(256),
    birthday datetime
)
For this query:
select id, name, birthday, haircolor from person
What is the rationale behind SQL throwing an error in this situation? I was considering how flexible it would be if queries simply returned null for non-existent columns. What would be concrete reasons for or against such a SQL language design?
No, because then you'd assume there is a haircolor column, which could have the following implications, just as an example:
You'd think you could insert into the field, since you would assume that the column exists;
The returned result would not be an accurate representation of the database schema.
Errors are given so that you can have a clear understanding and indication of what you can and cannot do. They're also there so you can catch exceptions, bugs, spiders, and of course creepy crawlies as soon as possible. :)
We don't want to accommodate lazy developers.
Do the right thing, be a man - Russel Peters.
This would be inconsistent for many reasons. One simple example is the use of * to select all columns: when you run
select id, name, birthday, hair_color from person
and get back a table with all four columns present (hair_color is set to null on all rows) you would reasonably expect that the query below
select * from person
returned at least four columns, and that hair_color is among the columns returned. However, this wouldn't be the case if SQL allowed non-existent columns to return nulls.
This would also create hard-to-find errors when a column gets renamed in the schema, but some of the queries or stored procedures do not get re-worked to match the new names.
Generally, though, SQL engine developers make tradeoffs between usability and "hard" error checking. Sometimes, they would relax a constraint or two, to accommodate some of the most common "simple" mistakes (such as forgetting to use an aggregate function in a grouped query; it is an error, but MySql lets you get away with it). When it comes to schema checks, though, SQL engines do not allow any complacency, treating all missing schema elements as errors.
MySQL has one horrible bug in it...
select field1, field2, field3
from table
group by field1
Any database engine would return 'error, wtf do I do with field2 and field3 in the select line when they are neither an aggregate nor in the group by statement'.
MySQL, on the other hand, will return two arbitrary values for field2 and field3 and not return an error (which I refer to as 'doing the wrong thing and not returning an error'). This is a horrid bug; the number of scripts I've troubleshooted only to discover that MySQL is not handling GROUP BY correctly is absurd... give an error before giving unintentional results and this huge bug won't be such an issue.
Doesn't it seem to you that you are requesting more of this stupid behaviour to be propagated... just in the select clause instead of the group by clause?
edit:
typo propagation as well.
select age,gender, haricolour from...
I'd prefer to get an error back saying 'great typo silly' instead of a misnamed field full of nulls.
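For what it's worth, MySQL can be told to reject such queries up front. A sketch, assuming a MySQL version where the ONLY_FULL_GROUP_BY sql_mode is available (it is part of the default sql_mode from 5.7 on); the table name is just an illustration:

```sql
-- ONLY_FULL_GROUP_BY makes the server reject the query instead of
-- silently picking arbitrary values for the non-grouped columns:
SET SESSION sql_mode = CONCAT(@@sql_mode, ',ONLY_FULL_GROUP_BY');

SELECT field1, field2, field3
FROM some_table
GROUP BY field1;
-- now fails with error 1055:
-- "Expression #2 of SELECT list is not in GROUP BY clause ..."
```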
This would result in a silent errors problem. The whole reason you have compile time errors and runtime exceptions is to catch bugs as soon as possible.
If you had a typo in a column name and it returned null instead of an error, your application will continue to work, but various things would misbehave as a result. You might have a column that saves a user setting indicating they don't want to receive emails. However your query has a typo and so it always returns null. Gradually a few people begin to report that they are setting the "Do not send emails" setting but are still getting emails. You have to hunt through all your code to figure out the cause. First you look at the form that edits this setting, then at the code that calls the database to save the setting, then you verify the data exists and the setting is getting persisted, then you look at the system that sends emails, and work your way up to the DB layer there that retrieves settings and painstakingly look through the SQL for that typo.
How much easier would that process be if it had just thrown an error in the first place? No users frustrated. No wasting time with support requests. No wasting time troubleshooting.
Returning null would be too generic; I like it this way much better. In your flow you can probably catch errors and return null if you want. But giving more info about why the query failed is better than receiving a null (and probably scratching your head over your JOIN and WHERE clauses).

Counting occurence of each distinct element in a table

I am writing a log viewer app in ASP.NET / C#. There is a report window where it will be possible to check some information about the whole database. One kind of information I want to display there is the number of times each generator (an entity in my domain, not Firebird's sequences) appears in the table. How do I do that using COUNT?
Do I have to:
Gather the key for each different generator
Run one query for each generator key using count
Display it somehow
Is there any way I can do it without having to do two queries to the database? The database size can be HUGE, and having to query it "X" times, where "X" is the number of generators, would just suck.
I am using a Firebird database; is there any way to fetch this information from some metadata schema, or is no such thing available?
Basically, what I want is to count each occurrence of each generator in the table. The result would be something like: GENERATOR A: 10 times, GENERATOR B: 7 times, GENERATOR C: 0 times, and so on.
If I understand your question correctly, it is a simple matter of using the GROUP BY clause, e.g.:
select
  key,
  count(*)
from generators
group by key;
Something like the query below should be sufficient (depending on your exact structure and requirements)
SELECT KEY, COUNT(*)
FROM YOUR_TABLE
GROUP BY KEY
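Note that grouping over the event table alone will never produce the "Generator C: 0 times" rows the question asks for: a generator with no matching rows simply won't appear in the result. A sketch of the zero-count variant, assuming a GENERATORS lookup table (my invention) alongside the EVENTSGENERAL table from the accepted answer:

```sql
-- Start from the lookup table and LEFT JOIN the event rows;
-- COUNT(e.GENERATOR_) counts only non-null (matching) rows,
-- so generators with no events come back with 0.
SELECT g.GENERATOR_, COUNT(e.GENERATOR_) AS occurrences
FROM GENERATORS g
LEFT JOIN EVENTSGENERAL e ON e.GENERATOR_ = g.GENERATOR_
GROUP BY g.GENERATOR_;
```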
I solved my problem using this simple query:
SELECT GENERATOR_, count(*)
FROM EVENTSGENERAL
GROUP BY GENERATOR_;
Thanks for those who helped me.
It took me 8 hours to come back and post the answer, because of the Stack Overflow restriction on answering your own questions at my reputation level.

MSAccess SQL Injection

Situation:
I'm doing some penetration testing for a friend of mine and have total clearance to go postal on a demo environment. The reason is that I saw an XSS hole in his online ASP application (an error page with the error as a param, allowing HTML).
He has an Access DB, and because of his lack of input validation I came upon another hole: he allows SQL injection in a WHERE clause.
I tried some stuff from:
http://www.krazl.com/blog/?p=3
But this gave limited results:
MSysRelationships is open, but his Objects table is shielded.
' UNION SELECT 1,1,1,1,1,1,1,1,1,1 FROM MSysRelationships WHERE '1' = '1 <-- worked, so I know how many columns the parent query has. I don't know how I can exploit the relationships table to get table names (I can't find any explanation of its structure, so I don't know what to select on).
Tried brute-forcing some table names, but to no avail.
I do not want to trash his DB, but I do want to point out the serious flaw with some backing.
Anyone has Ideas?
Usually there are two ways to proceed from here. You could try to guess table names by the type of data stored in them, which often works ("users" usually stores the user data ...). The other method is to provoke verbose error messages in the application to see if you can fetch table or column names from there.
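A sketch of the first approach, reusing the 10-column UNION probe from the question (users is of course just a guessed name):

```sql
-- Probe for a guessed table name: an Access error along the lines of
-- "cannot find the input table or query 'users'" means the guess is
-- wrong, while a normal result page means the table exists.
' UNION SELECT 1,1,1,1,1,1,1,1,1,1 FROM users WHERE '1' = '1
```

Repeat with other likely names (accounts, orders, customers, ...); the error/no-error distinction is all you need to confirm a table's existence without touching its data.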