Here is my test scenario.
When a command line executes, a lot of values are written to different tables in a database.
There are multiple command-line options, and many values/tables to be verified in the database. How should I go about designing the checks for values in the database?
What I have done so far:
Execute the command.
Connect to the database.
Run the query particular to the command (this is very specific to the command and determines which tables I need to look in).
From the dataset returned, check that dt[row][col] == "value I expect" (sketched below).
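A simplified sketch of that check, with placeholder connection string, query, and values (Assert comes from whichever test framework is in use):

    // Inside a test method; the connection string, query, and expected
    // value below are placeholders.
    using System.Data;
    using System.Data.SqlClient;

    var dt = new DataTable();
    using (var connection = new SqlConnection("Server=...;Database=...;"))
    using (var adapter = new SqlDataAdapter("SELECT Col1, Col2 FROM SomeTable", connection)) {
        adapter.Fill(dt);
    }
    Assert.AreEqual("value I expect", dt.Rows[0]["Col1"].ToString());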
The advantage here is that I have to write less code as part of the framework.
The downside is that I have to write more code when developing each test; it isn't streamlined, and I may sometimes get the column names wrong.
So I am trying to see if I can streamline the checks better: something like declaring a class per table, with the columns exposed as properties. That way, at least, I won't get the column names wrong.
Is this the simplest approach for the long run? (I want to write more reusable code.) If not, what is the best way?
Also, is there an easy way to export table/column definitions from the database into a C# project file (as a class/code), so that I don't have to retype everything? The kind of class I have in mind is sketched below.
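For example (hand-written; the table and column names here are invented):

    // Columns exposed as constants, so a misspelled column name becomes
    // a compile-time error. Table and column names are made up.
    public static class OrdersTable {
        public const string TableName = "Orders";
        public const string OrderId = "OrderId";
        public const string CustomerName = "CustomerName";
    }

    // Usage in a test:
    // Assert.AreEqual("Smith", dt.Rows[0][OrdersTable.CustomerName]);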
Let me know if any details are not clear or if you would like me to elaborate a bit more.
Thanks for looking.
I don't know of a standard approach for this problem, but I'll offer some ideas.
I usually find myself creating classes to represent tables to take advantage of compile-time checks, so I think that's a good way to go. You might want to look into Linq-to-SQL -- I think it can do a lot of this for you. I sometimes use ActiveRecord in Ruby for this purpose, even on C# projects, because development with it is very quick.
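For example, the designer-generated Linq-to-SQL classes give you compile-time-checked access to columns; roughly (MyDatabaseDataContext and its Persons table are invented names here, standing in for what the .dbml designer would generate):

    using System.Linq;

    using (var db = new MyDatabaseDataContext()) {
        var person = db.Persons.Single(p => p.Id == 42);
        Assert.AreEqual("Smith", person.LastName);  // compile-time checked access
    }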
Alternatively, you might want to put the tests in text files:
Command:
command to execute
Expected Data:
SELECT column FROM table;
row name, column name, expected data
row name, column name, expected data
row name, column name, expected data
Expected Data:
SELECT column FROM table;
row name, column name, expected data
row name, column name, expected data
Then write a little code to load and parse your files, run the command, and compare the results. This would factor out only the things that change with each test, so I don't know if it can get much better.
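The loader/parser can stay very small. A rough C# sketch, assuming exactly the layout above (a real version would want error handling and a way to cope with commas inside values):

    using System.Collections.Generic;
    using System.IO;

    // One expected value: which row/column of which query's result,
    // and the value it should hold.
    public class Expectation {
        public string Query, Row, Column, Value;
    }

    public class TestSpec {
        public string Command;
        public List<Expectation> Expectations = new List<Expectation>();
    }

    public static class TestFileParser {
        // Parses the "Command:" / "Expected Data:" layout shown above.
        public static TestSpec Parse(string path) {
            var spec = new TestSpec();
            string currentQuery = null;
            string[] lines = File.ReadAllLines(path);

            for (int i = 0; i < lines.Length; i++) {
                string line = lines[i].Trim();
                if (line == "Command:") {
                    spec.Command = lines[++i].Trim();
                } else if (line == "Expected Data:") {
                    currentQuery = lines[++i].Trim();   // the SELECT statement
                } else if (line.Length > 0) {
                    // "row name, column name, expected data"
                    string[] parts = line.Split(',');
                    spec.Expectations.Add(new Expectation {
                        Query = currentQuery,
                        Row = parts[0].Trim(),
                        Column = parts[1].Trim(),
                        Value = parts[2].Trim()
                    });
                }
            }
            return spec;
        }
    }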
Another idea is to pull the common code into a base class and keep the varying code in subclasses. This design follows the Template Method pattern.
Example
class MyTest : CommandLineTest {
    public override string Command() { return "command to execute"; }
    public override string DataRetrievalCommand() { return "SELECT column FROM table"; }
    public override DataResult[] ExpectedData() {
        // (column name, row index, expected value)
        return new[] { new DataResult("column", 0, "value") /* ... */ };
    }
}
The superclass will use these methods to get the commands and values, but it will be the one doing all the actual work. It's a similar idea to the text files, but the test specification is kept in code.
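Concretely, the superclass might look something like this (a sketch: how the command is launched, how the connection string is supplied, and the DataResult shape are all assumptions for illustration):

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Diagnostics;

    public abstract class CommandLineTest {
        // Subclasses supply only the varying pieces.
        public abstract string Command();
        public abstract string DataRetrievalCommand();
        public abstract DataResult[] ExpectedData();

        // The superclass does all the actual work.
        public void Run(string connectionString) {
            // 1. Execute the command line under test.
            Process.Start("cmd.exe", "/c " + Command()).WaitForExit();

            // 2. Run the data-retrieval query.
            var table = new DataTable();
            using (var connection = new SqlConnection(connectionString))
            using (var adapter = new SqlDataAdapter(DataRetrievalCommand(), connection)) {
                adapter.Fill(table);
            }

            // 3. Compare actual values to expected values.
            foreach (var expected in ExpectedData()) {
                var actual = table.Rows[expected.Row][expected.Column].ToString();
                if (actual != expected.Value)
                    throw new Exception(string.Format(
                        "Row {0}, column {1}: expected '{2}' but got '{3}'",
                        expected.Row, expected.Column, expected.Value, actual));
            }
        }
    }

    public class DataResult {
        public readonly string Column;
        public readonly int Row;
        public readonly string Value;
        public DataResult(string column, int row, string value) {
            Column = column; Row = row; Value = value;
        }
    }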
Hope this helps or at least gets a few ideas flowing.
The situation:
When I set out to write a function or stored proc, I usually start in a plain query window with SQL code. Often, I use #tblvar local temp tables to hold subsets of data needed later in the script.
While testing the script under development, I SELECT the contents of the #tblvar tables to check that the data is correct for the scenario being tested.
Then, once I have debugged the complex query, I place the working code into a new stored proc or user-defined function.
But first, I need to remove or comment out those "SELECT #tblvar" statements.
I do this using the following sample/example code:
--DEBUG_SELECT
SELECT '#tblvarCostsAll_1' AS 'QueryName', * FROM #tblvarCostsAll WHERE (UID_VEHICLE IN (1628,1638,1672)) ORDER BY DATE_RANGE_CODE, UID_VGROUP, UID_VEHICLE;
--DEBUG_RETURN RETURN;
This makes it simple to search for the phrase "--DEBUG_" and toggle the debug statements: joining the --DEBUG_SELECT comment line with the adjacent SELECT line comments the SELECT out, and splitting them apart again re-enables it.
The Question...
Is there a best practice for developing good SQL code, going from ad-hoc queries to user-defined functions and stored procedures?
Thanks...John
I'm using Selenium to automate interaction with a table on a webpage.
The table has a few columns, and I'm sorting the data in the table by clicking on the sortable headers (column names).
I've used a switch-case statement such as
switch (columnName) {
    case "firstname":
        columnHeader = driver.findElement(By.xpath("//th[text()='First Name']"));  // example locator
        break;
    case "lastname":
        columnHeader = driver.findElement(By.xpath("//th[text()='Last Name']"));  // example locator
        break;
    // ... and so on
}
What would be a better alternative to using switch-case?
Also, I used this approach because I didn't want to write a separate method for each column.
My suggestion is to go ahead and write a method to sort the table for each column. This prevents a few problems:
1) consumers don't have to look at the page to remember which columns are available to sort by,
2) consumers don't have to read the code of your function to figure out which string to pass to sort by the desired column,
3) you don't have to worry about handling bad strings,
4) consumers don't have to worry about passing a bad string, and so on.
Many of these problems won't be caught at compile time, which means you will (or may) only find them while the script is running. You want to give preference to finding as many bugs as possible at compile time. It will save you a LOT of time and prevent a LOT of bugs down the road.
Another benefit is that consumers can look at the API, see methods like .sortTableByFirstName(), .sortTableByLastName(), etc., and it's immediately obvious what each one does. It also moves bugs to compile time, because you can't call a method that doesn't exist just because a name was typo'd. A sketch of what this could look like follows.
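For illustration (shown here with Selenium's .NET bindings in C#; the shape is identical in Java, and the XPath locators are made up):

    using OpenQA.Selenium;

    public class UserTablePage {
        private readonly IWebDriver driver;

        public UserTablePage(IWebDriver driver) { this.driver = driver; }

        // One small, discoverable method per sortable column.
        public void SortTableByFirstName() { ClickHeader("//th[text()='First Name']"); }
        public void SortTableByLastName()  { ClickHeader("//th[text()='Last Name']"); }

        // The shared mechanics live in one private helper, so each
        // public method stays a one-liner.
        private void ClickHeader(string xpath) {
            driver.FindElement(By.XPath(xpath)).Click();
        }
    }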
First of all, sorry for my English; it is not my native language. So:
I want to execute a SQL query from a script to get some data. I don't know if that's possible and, if so, how to do it. To summarize:
The script adds a button in M3 Smart Office (an ERP). I have already done that.
When I select a row in an M3 function (like an article, or a client), I want to take its ID (and some other data) and send it to a website.
There are a lot of functions in M3. In each function, there are fields that contain data. One of them contains the ID of the object (an article, a client, ...). What I want to do is get this ID. The problem is that the field containing the ID doesn't have the same name in every function. So, I have two solutions:
Do a lot of if/else-ifs, like "if it's such-and-such function, take such-and-such field". But if I (or somebody else) want to add a function/field combination later, I (or somebody else ;) ) need to edit the script. That's not practical.
Create a SQL table which contains all the function/field combinations. Then, in the script, I run a SQL query and get all the data the script needs; a sketch of this follows.
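Roughly like this (a C# sketch, assuming SQL Server; the table, column, and connection-string names are all invented):

    using System.Data.SqlClient;

    // Look up which field holds the ID for a given M3 function, from a
    // hand-maintained mapping table. All names here are invented.
    public static string GetIdFieldName(string functionName) {
        using (var connection = new SqlConnection("Server=...;Database=...;")) {
            connection.Open();
            var command = new SqlCommand(
                "SELECT FieldName FROM FunctionFieldMap WHERE FunctionName = @fn",
                connection);
            command.Parameters.AddWithValue("@fn", functionName);
            return (string)command.ExecuteScalar();
        }
    }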
So that's the situation. Maybe you have ideas for doing this differently (without SQL), and I'll gladly take them!
Please see this in-depth tutorial from the 4guysfromrolla site:
Server-Side JScript Objects
I am having some trouble generating my DAO/POJO code using Hibernate for a PostgreSQL database written using the CamelCase notation. Everything works fine until the code generation time: only my lowercase tables are generated!
If I have a table called Person, the Hibernate Configurations View will show it, but without any attributes. Say I have another table, car: it will be shown with all of its attributes. Furthermore, at code generation time, car will appear in the destination package, while the CamelCase tables won't, as they are completely ignored.
I found a way of overriding the default metadata generation class (JDBCMetaDataDialect), but it doesn't work. Even if it did work, I think my POJO/DAO objects would not work, because the PostgreSQLDialect dialect would handle the lowercase tables (and attributes?) in a wrong way.
How can I solve this issue? It looks like a bug, but I'm not sure of it.
I ended up always returning true from my generation method:
public boolean needQuote(String name) {
    // Quote every identifier so the CamelCase names survive
    // PostgreSQL's default lowercase folding.
    return true;
}
I am trying to figure out the best way to model a spreadsheet (from the database point of view), taking into account :
The spreadsheet can contain a variable number of rows.
The spreadsheet can contain a variable number of columns.
Each cell contains one single value, but its type (integer, date, string) is not known in advance.
It has to be easy (and performant) to generate a CSV file containing the data.
I am thinking about something like:
from django.db import models

class Spreadsheet(models.Model):
    name = models.CharField(max_length=100)
    creation_date = models.DateField()

class Column(models.Model):
    spreadsheet = models.ForeignKey(Spreadsheet, on_delete=models.CASCADE)
    name = models.CharField(max_length=100)
    type = models.CharField(max_length=100)

class Cell(models.Model):
    column = models.ForeignKey(Column, on_delete=models.CASCADE)
    row_number = models.IntegerField()
    value = models.CharField(max_length=100)
Can you think of a better way to model a spreadsheet? My approach stores all values as strings; I am worried that this will be too slow when generating the CSV file.
From a relational viewpoint:
Spreadsheet <-->> Cell : RowId, ColumnId, ValueType, Contents
There is no requirement for Row and Column to be entities, but you can model them that way if you like.
Databases aren't designed for this. But you can try a couple of different ways.
The naive way to do it is a version of One Table To Rule Them All: create a giant generic table, all types being (n)varchars, that has enough columns to cover any foreseeable spreadsheet. Then you'll need a second table to store metadata about the first, such as what Column1's spreadsheet column name is, what type it stores (so you can cast in and out), etc. Then you'll need triggers to run against inserts that check the incoming data against the metadata to make sure it isn't corrupt, etc. etc. etc. As you can see, this way is a complete and utter cluster. I'd run screaming from it.
The second option is to store your data as XML. Most modern databases have XML data types and some support for XPath within queries. You can also use XSDs to provide some kind of data validation, and XSLTs to transform that data into CSVs. I'm currently doing something similar with configuration files, and it's working out okay so far. No word on performance issues yet, but I'm trusting Knuth on that one.
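In .NET, for instance, that validate-then-transform pipeline might look roughly like this (the file names are invented, and a real version would add a validation error handler):

    using System.IO;
    using System.Xml;
    using System.Xml.Schema;
    using System.Xml.Xsl;

    // Validate the stored XML against an XSD while reading, then run an
    // XSLT that emits CSV. All file names here are invented.
    var settings = new XmlReaderSettings { ValidationType = ValidationType.Schema };
    settings.Schemas.Add(null, "spreadsheet.xsd");

    var xslt = new XslCompiledTransform();
    xslt.Load("spreadsheet-to-csv.xslt");

    using (var reader = XmlReader.Create("spreadsheet.xml", settings))
    using (var output = File.CreateText("spreadsheet.csv")) {
        xslt.Transform(reader, null, output);
    }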
The first option is probably much easier to search and faster to retrieve data from, but the second is probably more stable and definitely easier to program against.
It's times like this I wish Celko had a SO account.
You may want to study EAV (Entity-attribute-value) data models, as they are trying to solve a similar problem.
Entity-Attribute-Value - Wikipedia
The best solution greatly depends on how the database will be used. Try to find the couple of top use cases you expect, and then decide on the design. For example, if there is no use case for fetching the value of a single cell from the database (the data is always loaded at row level, or even in groups of rows), then there is no need to store a 'cell' as such.
That is a good question that calls for many answers depending on how you approach it; I'd love to share an opinion with you.
This topic is one of several we have researched at Zenkit; we even wrote an article about it, and we'd love your opinion on it: https://zenkit.com/en/blog/spreadsheets-vs-databases/