In the context of my SAP application I added a column to an existing table and would like to define a default value for it, so that old code working with the table (especially code that inserts rows) doesn't have to care about the new column; instead, I want it to be filled with a predefined default value automatically (only if no value is specified, of course).
The underlying DB system is an Oracle DB, though I only have access to it through the SAP GUI and ABAP SQL.
As our company's SAP expert did not know whether this is possible, I thought maybe someone here would. So - is this possible, and if so - how?
Edit - Requested Scenario details:
The scenario is actually very simple: we have a users table for our application containing the standard user stuff (name, some settings, IDs, division, a bunch of flags and so on), and I added a column to store a simple setting (the design the user has chosen for their web interface). It simply contains a name (CHAR 40). That's the column I talked about above, and I want its default value to be, let's say, "Default Design".
Please, don't even think about doing this at the database level. Seriously. Changes made to the database layer directly will not be visible inside the system and lead to all kinds of strange side effects that will be a nightmare to support. Besides, your changes won't be picked up by the Change and Transport System - you'd have to update the QA and production systems manually.
If possible, I'd recommend choosing your domain values in a way that the neutral field value (spaces, zero, whatever) corresponds to your default value. If this is not possible, please describe your scenario in detail to get a more specific answer.
The SAP R/3 / ABAP environment does not give you the option of adding default values for a column. You can only force the system to fill the new column with its non-NULL initial value when it is added, but this is usually a bad idea: it takes time to modify all the data and insert the default values, and depending on the table size and criticality, this can lead to a production outage. Filling the fields with default values has to be performed by the application server, not the database. In your case, I'd just add the logic to the read-access module, something like
IF my_user-ze_design IS INITIAL.
my_user-ze_design = co_ze_default_design.
ENDIF.
You can define default values for columns added to tables - and if your DB is Oracle 11g (or later), Oracle's "Dictionary Only Add Column" optimization means the default value is stored only as metadata in the dictionary, so existing records do not need to be updated with the default value and there is no overhead, no matter how large the table is.
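For reference, a rough sketch of what that would look like in plain Oracle SQL, reusing the table and column names from the scenario above (so treat them as assumptions):

ALTER TABLE users
  ADD (ze_design VARCHAR2(40) DEFAULT 'Default Design' NOT NULL);

On 11g the metadata-only optimization applies when the new column is declared with both DEFAULT and NOT NULL; on older releases Oracle physically updates every existing row instead.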
I have a table with a GUID identifier and one field that is a 5-character string which can be specified by the user, but is optional and should be unique per user. I'm looking for a way to have this field always filled, even if the user doesn't specify it. The easiest approach is to have it like "00001", "00002", etc., so that when the user doesn't specify it, it is stored like this. I'm using SQL and Entity Framework Core. What is the best way to achieve this?
EDIT: maybe a trigger that checks after insert whether that field is specified, and if not, just takes the current row number and converts it to a string? Does this make sense?
Cheers
Setting a default value such as '00001' can be done by defining the field with:
NOT NULL DEFAULT right('0000' || to_char(SomeSequence.nextval),5) (pseudo-code to be adapted to the DBMS you are connected to).
Compared to the solution in your EDIT, this will at least guarantee that 2 inserts at the same time from 2 different users get assigned different values.
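For example, on SQL Server (a common target for Entity Framework Core) the pseudo-code above could translate into something like this - the sequence, table and column names are just placeholders:

CREATE SEQUENCE dbo.SomeSequence AS INT START WITH 1;

ALTER TABLE dbo.MyTable
    ADD CONSTRAINT DF_MyTable_Code
    DEFAULT RIGHT('0000' + CAST(NEXT VALUE FOR dbo.SomeSequence AS VARCHAR(5)), 5)
    FOR Code;

Note that a default constraint only fires when the column is left out of the INSERT entirely; if Entity Framework sends an explicit NULL, the default will not be applied, so the mapping has to omit the column when the user provides no value.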
The real problem comes with the unique constraint on the column. This does not work nicely when mixing manual input with calculated values.
If as a user, I input (manually) 00005, then the insertion will fail when SomeSequence reaches 5.
I think this problem will exist regardless of how you implement the generation of values (sequence, trigger, external code, ...)
Even if you are fine with coding some additional (and probably complicated) logic to manage that, it will probably decrease concurrency.
I'm using GNOME Data Access (libgda) to access a database in a C program.
I use the GdaSqlBuilder to build my queries.
Here is an example of code that adds an equality condition on a field to a query:
GdaSqlBuilderId add_equal_condition(char* m_name, GValue* m_value)
{
    GdaSqlBuilderId name, value, condition;

    /* register the field name and the literal value as expressions of the statement */
    name = gda_sql_builder_add_id(builder, m_name);
    value = gda_sql_builder_add_expr_value(builder, NULL, m_value);

    /* combine the two expression ids into a "name = value" condition */
    condition = gda_sql_builder_add_cond(builder, GDA_SQL_OPERATOR_TYPE_EQUAL, name, value, 0);
    return condition;
}
Does libgda protect itself against SQL injection, or do I need to sanitize the input myself before I pass it to GDA?
Thanks in advance for your answers.
This is explained in the foreword:
When creating an SQL string which contains values (literals), one can
be tempted (as it is the easiest solution) to create a string
containing the values themselves, execute that statement and apply the
same process the next time the same statement needs to be executed
with different values. This approach has two major flaws outlined
below which is why Libgda recommends using variables in statements
(also known as parameters or place holders) and reusing the same
GdaStatement object when only the variable's values change.
https://developer.gnome.org/libgda/unstable/ch06s03.html
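In practice that means writing the SQL with placeholders and binding the values at execution time instead of concatenating them into the string. With libgda's SQL parser a statement would look roughly like this (table and parameter names are made up for the example):

SELECT * FROM users WHERE name = ##name::string

The actual value is then supplied through the statement's parameter set when it is executed, so it is never spliced into the SQL text itself.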
Even if the current version is not vulnerable, that does not mean that every future version will not be vulnerable. You should always, without any exception, take care of what a user provides.
The same goes for interfaces from other systems of any kind. This is not limited to SQL injection, and it is not a question of SQLi or the libraries you use. You are responsible for ensuring that a user can only enter the kind of data that you want them to enter, or for rejecting it otherwise. You cannot rely on other code to do that for you.
Generally: nothing can protect itself completely against a certain type of attack. It will always be limited to the attack vectors known at the time of writing.
Currently I have a database that stores boolean fields as VARCHAR(1) ('T' or 'F'). I want to replace these with BIT. The problem is that this would require a ton of changes in the program that uses the database. So I thought the logical step is to add a BIT field and replace the existing VARCHAR(1) field with a computed column that I access rather than accessing the BIT field (thus the program can continue to work as is without changes, and be changed to use the BIT field over time).
I know this won't work (UPDATE and INSERT don't work on computed columns). I know one option is to rename the existing table and add a view through which to access it, but I don't see that as a viable solution: adding and removing columns, changing dependent views, etc. would be prone to errors (and it's not a neat solution in my opinion).
My question is - what are my options to achieve the above behaviour (such that the program can continue working as is)?
An example:
User (Active VARCHAR(1), ...)
Changed to use computed columns: (won't work)
User (Active_B BIT, Active AS CASE Active_B WHEN 1 THEN 'T' ELSE 'F' END, ...)
UPDATE: Fixed error in example.
It would have to be:
ALTER TABLE dbo.[User]
    ADD Active AS CASE Active_B WHEN 1 THEN 'T' ELSE 'F' END PERSISTED
You need to use the column name (not the datatype) in the CASE. And I'd recommend making the computed column persisted, too - so that the value actually gets stored on disk (and not recomputed every time you access it).
An option is to have both a VARCHAR and a BIT field and use triggers to update between them.
I'll just have to figure out how to prevent infinite recursion. One idea is to have a field whose only purpose is to signal that the update came from within the other trigger (check whether that field is being updated, and include it in the update performed by the trigger). The updates need to go both ways to allow for easy backward compatibility.
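A rough sketch of how such a sync trigger might look, using TRIGGER_NESTLEVEL to break the recursion instead of an extra flag column (the table, key and trigger names are assumptions):

CREATE TRIGGER trg_User_SyncActive ON dbo.[User]
AFTER INSERT, UPDATE
AS
BEGIN
    -- bail out when this firing was caused by the trigger's own UPDATE below
    IF TRIGGER_NESTLEVEL(OBJECT_ID('dbo.trg_User_SyncActive')) > 1
        RETURN;

    IF UPDATE(Active)            -- legacy VARCHAR column written: push the value into the BIT column
        UPDATE u
        SET    Active_B = CASE i.Active WHEN 'T' THEN 1 ELSE 0 END
        FROM   dbo.[User] u
        JOIN   inserted i ON i.UserId = u.UserId;   -- UserId: assumed primary key
    ELSE IF UPDATE(Active_B)     -- new BIT column written: push the value into the legacy column
        UPDATE u
        SET    Active = CASE i.Active_B WHEN 1 THEN 'T' ELSE 'F' END
        FROM   dbo.[User] u
        JOIN   inserted i ON i.UserId = u.UserId;
END;

Direct recursion only happens when the RECURSIVE_TRIGGERS database option is ON, but the nest-level check keeps the trigger safe either way.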
Say that I needed to share a database with a partner. Obviously I have customer information in that database. Short of going through and identifying every column that contains private information and writing a custom script to 'scrub' the data, is there any tool or script which can scrub the data but keep the format intact (for example, if a string is 5 characters, it would stay 5 characters, only scrubbed)?
If not, how would you accomplish something like this, preferably in TSQL?
You may consider sharing only VIEWs - create VIEWs that hide the data you don't want to share.
Example:
CREATE VIEW v_customer
AS
SELECT
    NAME,
    LEFT(CreditCard, 5) + '****' AS CreditCard  -- OR, don't show this column at all
    ....
FROM customer
Firstly, I need to state a professional interest: I work for IBM, which has tools that do exactly this.
Step 1. Ensure you identify all the PII (Personally Identifiable Information). When sharing database information, the obvious column names like "name" are typically found easily, but you also need to find the "hidden" data, where the data is either embedded in a standard format (e.g. string-name-string) under a column name like "reference code", or sits in free-format text fields. As you have seen, this is not going to be an easy job unless you automate it. The tool for this is InfoSphere Discovery.
Step 2. Decide what shape the "scrubbed" data needs to be in. Changing name fields to random characters causes problems during testing, because users focus on the garbled text rather than on functional failures, so change names to real but fictitious ones. Credit card information often needs to be "valid": by that I mean it needs to have a valid prefix, say 49XX, while the rest is an invalid sequence. Finally, you need to ensure that every instance of the change is propagated through the database to maintain consistency. The tool for this is Optim Test Data Management with the Data Privacy option.
The two tools integrate to give a full data privacy solution.
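If you do end up scripting it yourself in T-SQL, one way to honour that consistency requirement is a one-off mapping table that assigns each real value a single fake replacement and then applies it everywhere (a sketch only - table and column names are invented):

-- build one fake value per distinct real value
SELECT d.CustomerName AS RealName,
       'Customer ' + CAST(ROW_NUMBER() OVER (ORDER BY d.CustomerName) AS VARCHAR(10)) AS FakeName
INTO   #NameMap
FROM   (SELECT DISTINCT CustomerName FROM dbo.Customer) AS d;

-- apply the same replacement wherever the value occurs
UPDATE c
SET    c.CustomerName = m.FakeName
FROM   dbo.Customer c
JOIN   #NameMap m ON m.RealName = c.CustomerName;

Repeat the UPDATE for every other table that stores the same value so the scrubbed data stays internally consistent.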
Based on the original question, it seems you need the fields to be the same length, but not in a "valid" format? How about:
UPDATE customers
SET email = REPLICATE('z', LEN(email))
-- additional fields as needed
Copy/paste and rename tables/fields as appropriate. I think you're going to have a hard time finding a tool that's less work, unless your schema is very complicated, or my formatting assumptions are incorrect.
I don't have an MSSQL database in front of me right now, but you can also find all of the string-like columns by something like:
SELECT *
FROM INFORMATION_SCHEMA.COLUMNS
WHERE DATA_TYPE IN ('...', '...')
I don't remember the exact values you need to compare against, but if you run the query and see what's there, they should be pretty self-explanatory.
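If it helps, the character types on a SQL Server instance would typically be the ones below (worth double-checking on your version):

SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE DATA_TYPE IN ('char', 'varchar', 'nchar', 'nvarchar', 'text', 'ntext')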
I have an SQLCLR trigger. It contains a large and messy SELECT inside, with parts like:
(CASE WHEN EXISTS(SELECT * FROM INSERTED I WHERE I.ID = R.ID)
THEN '1' ELSE '0' END) AS IsUpdated -- Is selected row just added?
as well as JOINs etc. I'd like to have the result as a single table with everything included.
Question 1. Can I move this SELECT to the SQL Server side? If yes, how do I do this?
By "move", I mean creating a stored procedure or something else that can be executed before reading the dataset in the while loop.
The two following questions only make sense if the answer is "yes".
Why do I want to move the SELECT? First, I don't like mixing SQL with C# code. Second, I suppose that server-side queries run faster, since the server has more chances to cache them.
Question 2. Am I right? Is this some sort of optimization?
Also, the SELECT contains constant strings, but they are localizable. For instance,
WHERE R.Status = "Enabled"
"Enabled" should be changed for French, German etc. So, I want to write 2 static methods -- OnCreate and OnDestroy -- then mark them as stored procedures. When registering/unregistering my assembly on server side, just call them respectively. In OnCreate format the SELECT string, replacing {0}, {1}... with required values from the assembly resources. Then I can localize resources only, not every script.
Question 3. Is it good idea? Is there an existing attribute to mark methods to be executed by SQL Server automatically after (un)registartion an assembly?
Regards,
Well, the SQL-CLR trigger will also execute on the server, inside the server process - so that's server-side as well, no benefit there.
But I agree - triggers ought to be written in T-SQL whenever possible - there's no real big benefit in having triggers in C#. Can you show the whole trigger code? Unless it contains really odd-ball stuff, it should be pretty easy to convert to T-SQL.
I don't see how you could "move" the SELECT to the SQL side and keep the rest of the code in C# - either your trigger is in T-SQL (my preference), or then it is in C#/SQL-CLR - I don't think there's any way to "mix and match".
To start with, you probably do not need that type of subquery inside whatever query you are doing. The INSERTED table only has rows that have been updated (or inserted, but we can assume this is an UPDATE trigger based on the comment in your code). So you can either INNER JOIN, in which case you will only match the rows in the table aliased "R" that were updated, or you can LEFT JOIN, in which case the rows in R showing NULL for all INSERTED columns are the ones that were not updated.
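For instance, the EXISTS subquery could be replaced by a LEFT JOIN along these lines (the real table behind the alias R and its key column are assumptions):

SELECT R.*,
       CASE WHEN I.ID IS NOT NULL THEN '1' ELSE '0' END AS IsUpdated
FROM   SomeTable R
LEFT JOIN INSERTED I ON I.ID = R.ID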
Question 1) As marc_s said below, the Trigger executes in the context of the database. But it goes beyond that. ALL database related code, including SQLCLR executes in the database. There is no client-side here. This is the issue that most people have with SQLCLR: it runs inside of the SQL Server context. And regarding wanting to call a Stored Proc from the Trigger: it can be done BUT the INSERTED and DELETED tables only exist within the context of the Trigger itself.
Question 2) It appears that this question should have started with the words "Also, the SELECT". There are two things to consider here. First, when testing for "Status" values (or any Lookup values) since this is not displayed to the user you should be using numeric values. A "status" of "Enabled" should be something like "1" so that the language is not relevant. A side benefit is that not only will storing Status values as numbers take up a lot less space, but they also compare much faster. Second is that any text that is to be displayed to the user that needs to be sensitive to language differences should be in a table so that you can pass in a LanguageId or LocaleId to get the appropriate French, German, etc. strings to display. You can set the LocaleId of the user or system in general in another table.
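A minimal sketch of that kind of lookup table (all names and the locale scheme are assumptions):

CREATE TABLE dbo.StatusText
(
    StatusId INT          NOT NULL,  -- 1 = Enabled, 2 = Disabled, ...
    LocaleId INT          NOT NULL,  -- e.g. 1033 = en-US, 1036 = fr-FR
    Label    NVARCHAR(50) NOT NULL,
    CONSTRAINT PK_StatusText PRIMARY KEY (StatusId, LocaleId)
);

The query itself then filters on the numeric code (WHERE R.StatusId = 1) and only joins to StatusText when a label actually has to be shown to the user.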
Question 3) If by "registration" you mean that the Assembly is either CREATED or DROPPED, then you can trap those events via DDL Triggers. You can look here for some basics:
http://msdn.microsoft.com/en-us/library/ms175941(v=SQL.90).aspx
But CREATE ASSEMBLY and DROP ASSEMBLY are events that are trappable.
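A bare-bones sketch of such a DDL trigger (the trigger name and the PRINT are placeholders for whatever you actually want to do):

CREATE TRIGGER trg_AssemblyEvents
ON DATABASE
FOR CREATE_ASSEMBLY, DROP_ASSEMBLY
AS
BEGIN
    -- EVENTDATA() returns an XML description of the DDL event; log it or call your own proc here
    DECLARE @e XML;
    SET @e = EVENTDATA();
    PRINT CONVERT(NVARCHAR(MAX), @e);
END;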
If you are speaking of when Assemblies are loaded and unloaded from memory, then I do not know of a way to trap that.
Question 1.
http://www.sqlteam.com/article/stored-procedures-returning-data
Question 3.
It looks like there are no appropriate attributes, at least in the Microsoft.SqlServer.Server namespace.