I essentially have a database layer that is totally isolated from any business logic. This means that whenever I get ready to commit some business data to a database, I have to pass all of the business properties in as the data method's parameters. For example:
Public Function Commit(foo As Object) As Boolean
This works fine, but when I get into commits and updates that take dozens of parameters, it can be a lot of typing. Not to mention that two of my methods--update and create--take the same parameters, since they essentially do the same thing. What I'm wondering is: what would be an optimal solution for passing these parameters, so that I don't have to change the parameters in both methods every time something changes, and so that I can reduce my typing :) I've thought of a few possible solutions. One would be to move all the SQL parameters to the class level of the data class and then store them in some sort of array that I set in the business layer. Any help would be useful!
So essentially you want to pass in a List of Parameters?
Why not redo your Commit function and have it accept a List of Parameter objects?
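For example, a minimal sketch of such a signature (Java here rather than VB, and Parameter is a made-up holder class, so treat it as an assumption rather than the actual API):

import java.util.List;

// Hypothetical name/value holder; any similar struct/class would do.
class Parameter {
    final String name;
    final Object value;

    Parameter(String name, Object value) {
        this.name = name;
        this.value = value;
    }
}

class DataLayer {
    // One collection replaces dozens of positional arguments, so Create and
    // Update can share exactly the same signature.
    public boolean commit(String table, List<Parameter> values) {
        for (Parameter p : values) {
            // build the INSERT/UPDATE text and bind p.name / p.value here
        }
        return true;
    }
}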
If you're on SQL Server 2008, you can use MERGE to replace the insert/update juggling. This is sometimes called an upsert.
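A hedged sketch of what such a MERGE could look like, executed from JDBC; the dbo.Customer table and its columns are invented for illustration:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

class UpsertExample {
    // Sketch only: dbo.Customer and its columns are hypothetical.
    static void upsert(Connection conn, int id, String name) throws SQLException {
        String merge =
            "MERGE INTO dbo.Customer AS target "
            + "USING (SELECT ? AS Id, ? AS Name) AS source "
            + "ON target.Id = source.Id "
            + "WHEN MATCHED THEN UPDATE SET Name = source.Name "
            + "WHEN NOT MATCHED THEN INSERT (Id, Name) VALUES (source.Id, source.Name);";
        try (PreparedStatement ps = conn.prepareStatement(merge)) {
            ps.setInt(1, id);
            ps.setString(2, name);
            ps.executeUpdate();
        }
    }
}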
You could create a struct to hold the parameter values.
Thanks for the responses, but I think I've figured out a better way for what I'm doing. It's similar to using an upsert, but what I do is have one method called Commit that looks for the given primary key. If the record is found in the database, I execute an update command; if not, I do an insert command. Since the parameters are the same for both, you don't have to worry about changing them in two places.
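Roughly, the shape of that Commit (sketched in Java/JDBC with an invented dbo.Customer table; the point is that both branches bind the same parameters in the same order):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

class CommitExample {
    // One entry point for both create and update: probe for the key,
    // then run whichever statement applies with the same parameter values.
    static void commit(Connection conn, int id, String name) throws SQLException {
        boolean exists;
        try (PreparedStatement check =
                 conn.prepareStatement("SELECT 1 FROM dbo.Customer WHERE Id = ?")) {
            check.setInt(1, id);
            try (ResultSet rs = check.executeQuery()) {
                exists = rs.next();
            }
        }
        String sql = exists
            ? "UPDATE dbo.Customer SET Name = ? WHERE Id = ?"
            : "INSERT INTO dbo.Customer (Name, Id) VALUES (?, ?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, name);
            ps.setInt(2, id);
            ps.executeUpdate();
        }
    }
}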
For your problem I guess the Iterator design pattern is the best solution. Pass in an interface implementation, say ICommitableValues, that exposes an enumeration of key/value pairs: the keys are the column names and the values are the column values to commit. A dedicated property can even return the table name to insert these values into (and/or the stored procedure to use, etc.).
To save typing, you can use declarative programming syntax (attributes) to mark the commitable properties, and a class in the middleware can use reflection to extract the values of those properties and prepare an ICommitableEnumeration implementation from them.
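A rough sketch of that attribute-plus-reflection idea, using Java annotations in place of .NET attributes (all of the names here, Commitable, Customer, the columns, are made up for illustration):

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.Map;

// Marker for properties that should be committed to the database.
@Retention(RetentionPolicy.RUNTIME)
@interface Commitable {
    String column();
}

class Customer {
    @Commitable(column = "CUSTOMER_NAME")
    String name = "Acme";

    @Commitable(column = "CREDIT_LIMIT")
    int creditLimit = 1000;
}

class CommitableExtractor {
    // Middleware-style helper: reflect over the annotated fields and build
    // the column-name/value pairs the data layer needs.
    static Map<String, Object> extract(Object businessObject) throws IllegalAccessException {
        Map<String, Object> values = new LinkedHashMap<>();
        for (Field f : businessObject.getClass().getDeclaredFields()) {
            Commitable c = f.getAnnotation(Commitable.class);
            if (c != null) {
                f.setAccessible(true);
                values.put(c.column(), f.get(businessObject));
            }
        }
        return values;
    }
}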
I'm struggling with what seems like the simplest thing there is: assigning a value to a mapping variable that I later use in my flow to make a decision upon... With my MS SSIS background this is a 10-second task, however in Informatica PowerCenter it is taking me hours...
So I have a mapping variable $$V_FF and a workflow variable $$V_FF. At first the names were different but while trying things out, I changed that. But that shouldn't matter, right?
In a mapping, I have a view as a source that returns -1, 0 or 1. The mapping variable aggregate function is set to MIN.
In the session that I have created for this mapping, I have a post-session assignment between the wf variable and the mapping variable.
In this mapping I use the SETVARIABLE function in an Expression Transformation block.
Every time I run the workflow, I see in the log that it uses a persistent value instead of assigning a new value every time the flow runs...
What am I missing here?
Thanks in advance!
Well, the variables here do work in a bit of a different way indeed. It would be easier to come up with a good answer if you explained the whole scenario: what are you using the variable for?
Anyway, the variable values are persisted in the repository and reused, as you've noticed. For your scenario you could add an Assignment Task to the workflow before your Session. Set some low value (e.g. -1, if you expect your variable to have some positive value after the mapping run) and use a PreSession Variable Assignment to pass the value to the mapping. This will override the use of the persisted repository value. Of course, in this case you will need to use Max aggregation.
In the end, I managed to accomplish what I wanted back then. There might be a better way, but this solution is easy to maintain and easy to understand.
Create a variable in your workflow, let's say $$FailureFlag, with type Integer.
Create a view in your DB that returns 1 row with an integer value between 0 and x, where x is a positive integer.
Create a mapping with the view we just created as the source and a dummy table as the destination.
In this mapping, also create a variable, let's say $$MYVAR, with type Integer and aggregation "Count". In an "Expression Transformation" I assign the result of the view, column FF, to this variable $$MYVAR by using SETVARIABLE($$MYVAR, FF).
From this mapping, create a session in your workflow. In the Components tab, in the "Postsession_success_variable_mapping" section, add a row and link workflow variable $$FailureFlag with session variable $$MYVAR.
Add a Decision component right after the session you just created, and test the content of your workflow variable, for example $$V_FAILURE_FLAG_IMX = 1.
Then connect your Decision with your destination and add a test clause, for example:
"$MyDecision.Condition = true AND $MyDecision.PrevTaskStatus = succeeded"
Voila, that's it.
I am inheriting an application which has to read data from various types of files and use the OCI interface to move the data into an Oracle database. Most of the tables in question have about 40-50 columns, so the SQL insert statements become pretty lengthy.
When I inherited this code, it basically built up the insert statements via a series of strcats as a C string, then passed it to the appropriate OCI functions to set up and execute the statement. However, since much of the data is read directly from files into the column values, this leaves the application open to easy SQL injection. So I am trying to use bind variables to solve this problem.
In every example OCI application I can find, each variable is statically allocated and bound individually. This would lead to quite a bit of boilerplate, however, and I'd like to reduce it to some sort of looping construct. So my solution is to, for each table, create a static array of strings containing the names of the table columns:
const char *const TABLE_NAME[N_COLS] = {
"COL_1",
"COL_2",
"COL_3",
...
"COL_N"
};
along with a short function that makes a placeholder out of a column name:
void makePlaceholder(char *buf, const char *col);
// "COLUMN_NAME" -> ":column_name"
So I then loop through each array and bind my values to each column, generating the placeholders as I go. One potential problem here is that, because the types of each column vary, I bind everything as SQLT_STR (strings) and thus expect Oracle to convert to the proper datatype on insertion.
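(For illustration only, here is that generate-the-placeholders-and-bind-everything-as-strings pattern sketched in JDBC rather than OCI; the column names and table are made up, and the table name is taken from a trusted constant, not user input.)

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

class DynamicInsert {
    // Hypothetical column list for one table.
    static final String[] COLS = { "COL_1", "COL_2", "COL_3" };

    // Build "INSERT INTO <table> (COL_1, COL_2, COL_3) VALUES (?, ?, ?)" from the
    // array, then bind every value as a string and let Oracle convert on insert.
    static void insertRow(Connection conn, String table, String[] values) throws SQLException {
        StringBuilder sql = new StringBuilder("INSERT INTO ").append(table).append(" (");
        sql.append(String.join(", ", COLS)).append(") VALUES (");
        for (int i = 0; i < COLS.length; i++) {
            sql.append(i == 0 ? "?" : ", ?");
        }
        sql.append(")");
        try (PreparedStatement ps = conn.prepareStatement(sql.toString())) {
            for (int i = 0; i < COLS.length; i++) {
                ps.setString(i + 1, values[i]);   // analogous to binding everything as SQLT_STR
            }
            ps.executeUpdate();
        }
    }
}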
So, my question(s) are:
What is the proper/idiomatic way (if such a thing exists for SQL/OCI) to use bind variables for SQL insert statements with a large number of columns/params? More generally, what is the best way to use OCI to make this type of large insert statement?
Do large numbers of bind calls have a significant cost in efficiency compared to building and using vanilla C strings?
Is there any risk in binding all variables as strings and allowing Oracle to make the proper type conversion?
Thanks in advance!
Not sure about the C aspects of this. My answer will be from a DBA perspective.
Question 2:
Always use bind variables. They prevent SQL injection and enhance performance.
The performance aspect is often overlooked by programmers. When Oracle receives a SQL statement, it hashes the entire SQL text and looks in its internal repository of execution plans to see if it already has one. If bind variables were used, the SQL text will be the same each time you run the query, no matter what the value of a variable is. However, if you have concatenated the string yourself, Oracle will hash the SQL text including the content of (what you ought to have put in) variables, getting a unique hash every time. So if you run a query one million times, with bind variables Oracle builds one execution plan, while without them it builds one million execution plans and wastes loads of resources doing so.
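To make the contrast concrete, a small sketch (JDBC, against the classic EMP demo table; names are only for illustration): the bound version hashes to the same SQL text no matter the value, while the concatenated version produces a new text, and therefore a new plan, for every value.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

class BindVsConcat {
    // Concatenation: a different SQL text (and a different parsed plan) for every
    // empno value, plus an injection hole if the value ever comes from user input.
    static void concatenated(Connection conn, String empno) throws SQLException {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT ename FROM emp WHERE empno = " + empno)) {
            while (rs.next()) { System.out.println(rs.getString(1)); }
        }
    }

    // Bind variable: one SQL text, hashed once, one shared execution plan.
    static void bound(Connection conn, int empno) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement("SELECT ename FROM emp WHERE empno = ?")) {
            ps.setInt(1, empno);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) { System.out.println(rs.getString(1)); }
            }
        }
    }
}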
When you read in a result set in Groovy it comes in a collection of maps.
Seems like you should be able to update values inside those maps and write them back out, but I can't find anything built into Groovy that lets me do so.
I'm considering writing a routine that lets me write back a modified map by iterating over the fields of one of the result objects, taking each key/value pair and using them to create the appropriate update statement, but it could be annoying, so I was wondering if anyone else had done this or if it's already available in Groovy.
It seems like just a few lines of code so I'd rather not bring in hibernate for this. I'm just thinking a little "update" method that would allow:
def rows = sql.rows(query)
rows[0].name = "newName"
update(sql, rows[0])
to update the first guy's name in the database. Anyone seen/created such a monster, or is something like this already built into Groovy Sql and I'm just missing it?
(I suppose you may have to point out to the update method which field is the key field, but that's doable...)
Using the rows method will actually read out all of the values into a List of GroovyRowResult so it's not really possible to update the data without creating an update method like the one you mention.
It's not really possible to do that in the generic case because your query can contain joins or a column reference that is an aggregate, etc.
If you're selecting from a single table, however, you can use the Sql.eachRow method and set the ResultSet to be an updatable one; then you can use the underlying ResultSet interface to update as you iterate through:
sql.resultSetConcurrency = ResultSet.CONCUR_UPDATABLE
sql.resultSetType = ResultSet.TYPE_FORWARD_ONLY
sql.eachRow(query) { row ->
    row.updateString('name', 'newName')
    row.updateRow()
}
Depending on the database/driver you use, you may not be able to create an updatable ResultSet.
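If an updatable ResultSet isn't an option, a hand-rolled helper along the lines the question describes is the fallback. A rough sketch, in plain JDBC rather than Groovy's Sql, where the table name and key column are passed in explicitly (both are assumptions for this sketch, and only single-table, non-aggregate rows are assumed):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

class RowUpdater {
    // Build "UPDATE <table> SET a = ?, b = ? WHERE <key> = ?" from the row map
    // and bind the values in the same order the SQL text was generated.
    static int update(Connection conn, String table, String keyColumn,
                      Map<String, Object> row) throws SQLException {
        List<String> cols = new ArrayList<>();
        List<Object> vals = new ArrayList<>();
        for (Map.Entry<String, Object> e : row.entrySet()) {
            if (!e.getKey().equals(keyColumn)) {
                cols.add(e.getKey() + " = ?");
                vals.add(e.getValue());
            }
        }
        String sql = "UPDATE " + table + " SET " + String.join(", ", cols)
                   + " WHERE " + keyColumn + " = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            int i = 1;
            for (Object v : vals) {
                ps.setObject(i++, v);
            }
            ps.setObject(i, row.get(keyColumn));
            return ps.executeUpdate();
        }
    }
}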
Is it possible to mock/spy functions with T-SQL? I couldn't find anything mentioning it. I was thinking of creating my own implementation using the SpyProcedure as a guideline (if no implementation exists). Anyone had any success with this?
Thanks.
In SQL Server, functions cannot have side effects. That means in your test you can replace the inner function with one that returns a fixed result, but there is no way to record the parameters that were passed into the function.
There is one exception: If the function returns a string and the string does not have to follow a specific format, you could concatenate the passed-in parameters and then assert later on that the value coming back out contained all the correct values, but that is a very special case and not generally possible.
To fake a function, just drop or rename the original and create your own within the test. I would put this code into a helper function, as it probably will be called from more than one test.
I have an ABAP class method, say, select_something. select_something has an exporting parameter, say, et_result. et_result is of type standard table because the type of et_result cannot be determined until runtime.
The method sometimes gives a short dump saying "With ABAP/4 Open SQL array select, the output table is too small" at "select * into table et_result from (lv_tablename) where...".
Error analysis:
......in this particular case, the database table is 3806 bytes wide, but the internal table is only 70 bytes wide.
I tried "any table" too and the error is the same.
You could return a data reference. Your query will no longer fail, and you can assign the data to a correctly typed field symbol afterwards.
" Definition
class-methods select_all
importing
!tabname type string
returning
value(results) type ref to data.
...
...
" Implementation
method select_all.
data dref type ref to data.
create data dref type standard table of (tabname).
field-symbols <tab> type any table.
assign dref->* to <tab>.
select * from (tabname) into table <tab>.
get reference of <tab> into results.
endmethod.
Also, I agree with vwegert that dynamic queries (and dynamic programming in general, for that matter) should be avoided when possible.
What you're trying to do looks horribly wrong on many levels. NEVER use SELECT FROM (whatever) unless someone points a gun at your head AND the door is locked tight. You'll lose every kind of static error checking the system might be able to provide you with. For example, the compiler will no longer be able to tell you "Hey, that table you're reading from is 3806 bytes wide." It simply can't tell, even if you use constants. You'll find that out the hard way, producing short dumps, especially when switching between Unicode and non-Unicode (NUC) systems, quite likely some in production systems. No fun.
(Actually there are a few - very very VERY few - good uses for dynamic table names in the SELECT statement. I need them about once every two to three years, and I code quite a lot of weird stuff. Just avoid them wherever you can, even at the cost of writing more code. It's just not worth the trouble of fixing broken stuff later.)
Then, changing the generic formal parameter type does not do anything to the type of the actual parameter. If you pass a STANDARD TABLE OF mandt WITH DEFAULT KEY to your method, that table will have lines of 3 characters. It will be a STANDARD TABLE, and as such, it will also be an ANY TABLE, and that's about it. You can twist the generic types any way you like; there's no way to enforce correctness using generic types the way you use them. It's up to the caller to make sure that all the right types are used. That's a bad way to fly.
First off, I agree with vwegert's response: try to avoid dynamic SQL selections if you can.
That said, check the short dump. If the error is an exception class, you can wrap the SELECT statement in a try/catch block and at least stop it from dumping.
You can also try "INTO CORRESPONDING FIELDS OF TABLE et_result". If ET_RESULT is dynamic, you might have to cast it into the proper structure using RTTS. This might give you some ideas...
Couldn't agree more with vwegert, but if there is absolutely no other way (and there usually is) of performing your task than using dynamic select statements and dynamically typed parameters, do some checks on the type of the table and the parameter at runtime.
Use CL_ABAP_TYPEDESCR and its subclasses to do so.
This way, you can handle errors at runtime without your program dumping.
But as vwegert said, this dynamic stuff is pure evil and will most certainly break at some point during runtime. Adding the necessary error handling will most likely be a lot more work, and a lot harder, than redesigning your code to use non-dynamic SQL and typed parameters.