I have a string of values like "4,3,8"
and a comma-separated column in a table, as below.
ID | PrdID | cntrlIDs
1  | 1     | 4,8
2  | 2     | 3
3  | 3     | 3,4
4  | 4     | 5,6
5  | 5     | 10,14,18
I want only those records from the table above that match the string mentioned above.
E.g. the records with ID 1, 2 and 3 should be in the output, because they match the passed string "4,3,8".
Note: I need this as an Entity Framework LINQ query.
string[] arrSearchFilter = "4,3,8".Split(',');
var query = (from prdtbl in ProductTables
             where prdtbl.cntrlIDs.Split(',').Any(x => arrSearchFilter.Contains(x))
             select prdtbl);
but it is not working and I get the error below:
LINQ to Entities does not recognize the method 'System.String[] Split(Char[])' method, and this method cannot be translated into a store expression.
LINQ to Entities tries to convert query expressions to SQL. String.Split is not one of the supported methods. See http://msdn.microsoft.com/en-us/library/vstudio/bb738534(v=vs.100).aspx
Assuming you are unable to redesign the database structure, you have to bypass the SQL filter, fetch ALL the records, and then apply the filter in memory. You can do this by calling ProductTables.ToList() and then running the second query, with the string split, against that list, e.g.
string[] arrSearchFilter = "4,3,8".Split(',');
var products = ProductTables.ToList();
var query = (from prdtbl in products
             where prdtbl.cntrlIDs.Split(',').Any(x => arrSearchFilter.Contains(x))
             select prdtbl);
This is not a good idea if the Product table is large, as you are losing a key benefit of SQL and loading ALL the data before filtering.
Redesign
If that is a problem and you can change the database structure, you should create a child table that replaces the comma-separated values with a proper normalised structure, along the lines of the sketch below. Comma-separated values might look like a convenient shortcut, but they are not a good design and, as you have found, they are not easy to work with in SQL.
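A minimal sketch of such a child table (SQL Server syntax; the table and column names other than PrdID are only illustrative):

CREATE TABLE ProductControl (
    ID      INT IDENTITY PRIMARY KEY,
    PrdID   INT NOT NULL,  -- foreign key to the product table
    cntrlID INT NOT NULL   -- one control ID per row instead of a CSV string
);

The filter then becomes a plain, index-friendly query that LINQ to Entities can translate without any string splitting:

SELECT DISTINCT PrdID
FROM ProductControl
WHERE cntrlID IN (4, 3, 8);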
SQL
If the design cannot be changed and the table is large, then your only other option is to hand-roll the SQL and execute it directly, but this would lose some of the benefits of having LINQ.
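For example, a rough sketch of such a hand-rolled filter (SQL Server syntax; the table name is an assumption). Wrapping both sides in commas avoids partial matches such as '4' matching '14':

SELECT *
FROM ProductTable
WHERE ',' + cntrlIDs + ',' LIKE '%,4,%'
   OR ',' + cntrlIDs + ',' LIKE '%,3,%'
   OR ',' + cntrlIDs + ',' LIKE '%,8,%';

You could build the OR list from the search string in code and run it via something like DbContext.Database.SqlQuery.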
Related
How would I go about joining results from multiple SQL queries so that they are side by side (but unrelated)?
The reason I am thinking of this is so that I can run one query in Google BigQuery and have it return one single table which I can import into Excel and use for some charts.
e.g. Query 1 looks at dataset TableA and returns:
**Metric:** Sales
**Value:** 3,402
And then Query 2 looks at dataset TableB and returns:
**Name:** John
**DOB:** 13 March
They would both use different tables and different filters, etc.
What would I do to make it look like:
| Sales | John     |
| 3,402 | 13 March |

Or alternatively:

| Sales | 3,402    |
| John  | 13 March |
Or is there a totally different way to do this?
I can see the use case for this; I've used something similar to create a single table from multiple tables with different metrics to query in Data Studio, so that filters apply to all the data in the dataset, for example. However, in that case the data did share some dimensions that made it worthwhile.
If you are going to put those together with no relationship between the tables, I'd have four columns, with a TYPE column describing the data in each row, to make filtering easier:
Type | Sales | Name | DOB
Use UNION ALL to put the rows together so you have something like
"Sales" | 3402 | null | null
"Customer Details" | null | John | 13 March
However, like the others said, make sure you have a good reason to do that otherwise you're just creating a bigger table to query for no reason.
My client has a set of numeric data stored in a string field in a database. So of course it doesn't sort correctly. These rows sort like this:
105
3
44
When they should sort like this:
3
44
105
This is very much a legacy database and I can't change it at all. I also can't change the software that uses the database. The client doesn't own it or have the source code. It has never worked the way they want. However, there is an unused string field that I could use to sort on (only a small number of fields can be sorted on.)
What I would like to do is take the input data, derive a string from it, and store the new string in the unused field, such that when the data is sorted on this new data, the original data sorts correctly, i.e., numerically.
So, for an overly simplistic example, if the algorithm produced the following new data:
105 -> c
3 -> a
44 -> b
Then when the second column was sorted, the first column would look 'correct'.
The tricky bit is that when new rows are added to the database, they must also sort correctly, without having to regenerate the sort data for all rows. This is the part of the problem that has my brain in a twist. I'm not sure it's actually possible.
You can assume that the number will never be more than 5 'digits'.
I realize this is a total kludge, but since I can't change the system, I have to find a work around, rather than a quality solution. Welcome to the real world.
~~~~~~~~~~~~~~~~~~~~~~ S O L U T I O N ~~~~~~~~~~~~~~~~~~
I don't think this is an uncommon problem, so here are the results of Gordon's solution:
mysql> select * from t order by new;
+------+------------+
| orig | new        |
+------+------------+
|    3 | 0000000003 |
|   44 | 0000000044 |
|  105 | 0000000105 |
+------+------------+
In most databases, you can just do:
order by cast(col as int)
This will convert the string representation to a number and use that for ordering. There is no need for an additional column. If you add one, I would recommend adding a numeric column to contain the actual value.
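For example, against the table shown in the solution above (note that MySQL spells the integer cast as UNSIGNED rather than INT, while most other databases accept INT):

select orig from t order by cast(orig as unsigned);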
If you really want to store something in the unused field, then you can left pad the number. How to do this depends on the database, but here is one typical method:
update t
set unused = right(concat('0000000000', col), 10);
Not all databases support these two specific functions, but all offer this basic functionality in some method.
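In MySQL, for instance (which is where the solution output above came from), LPAD does the same padding; the width of 10 matches the example above and comfortably covers the stated 5-digit maximum:

update t
set unused = lpad(col, 10, '0');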
Try something like
SELECT column1 FROM table1 ORDER BY LENGTH(column1) ASC, column1 ASC
(Adjust the column and table name for your environment.)
This is a bit of a hack but works as long as the "numbers" in your string column are natural, non-negative numbers only.
If you are looking for a more sophisticated approach or algorithm, try searching for natural sort together with your DBMS.
Say you want to record three numbers for every Movie record...let's say, :release_year, :box_office, and :budget.
Conventionally, using Rails, you would just add those three attributes to the Movie model and call @movie.release_year, @movie.box_office, and @movie.budget.
Would it save any database space or provide any other benefits to condense all three numbers into one umbrella column?
So when adding the three numbers, it would go something like:
def update
  ...
  @movie.umbrella = params[:movie_release_year] +
    "," + params[:movie_box_office] + "," + params[:movie_budget]
end
So the final @movie.umbrella value would be along the lines of "2015,617293,748273".
And then in the controller, to access the three values, it would be something like
@umbrella_array = @movie.umbrella.strip.split(',').map(&:strip)
@release_year = @umbrella_array.first
@box_office = @umbrella_array.second
@budget = @umbrella_array.third
This way, it would be the same amount of data (actually a little more, with the extra commas) but stored only in one column. Would this be better in any way than three columns?
There is no benefit in squeezing such attributes in a single column. In fact, following that path will increase the complexity of your code and will limit your capabilities.
Here's some of the possible issues you'll face:
You will not be able to add indexes to speed up looking up or filtering records by a specific attribute value
You will not be able to query a specific attribute value
You will not be able to sort by a specific column value
The values will be stored and represented as Strings, rather than Integers
... and I can continue. There are no advantages, only disadvantages.
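To make the contrast concrete, here is a hedged sketch (PostgreSQL syntax, illustrative table and column names): with three real columns the common queries stay trivial and indexable, while the umbrella column forces string surgery on every access.

-- with three real columns: straightforward and index-friendly
SELECT * FROM movies WHERE release_year = 2015 ORDER BY box_office DESC;

-- with the "umbrella" column: split and cast on every query
SELECT * FROM movies
WHERE split_part(umbrella, ',', 1) = '2015'
ORDER BY split_part(umbrella, ',', 2)::int DESC;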
I agree with the comments above; as an example, try using pg_column_size() to compare the results:
WITH test(data_txt, data_int, data_date) AS ( VALUES
    ('9999'::TEXT, 9999::INTEGER, '2015-01-01'::DATE),
    ('99999999'::TEXT, 99999999::INTEGER, '2015-02-02'::DATE),
    ('2015-02-02'::TEXT, 99999999::INTEGER, '2015-02-02'::DATE)
)
SELECT pg_column_size(data_txt)  AS txt_size,
       pg_column_size(data_int)  AS int_size,
       pg_column_size(data_date) AS date_size
FROM test;
The result is:
txt_size | int_size | date_size
----------+----------+-----------
5 | 4 | 4
9 | 4 | 4
11 | 4 | 4
(3 rows)
I have a field from a query returning multiple readings, as below, with a maximum of 8:
| readings    |
| 1;2;3;...;8 |
In my SSRS report, I need each reading to be in a separate column, e.g.:
| a | b | c | ...|
| 1 | 2 | 3 | ...|
I am using the 2005 versions of SSRS and SQL Server.
Could anyone help? Kind regards
Report-level
You can use the Split function to take a delimited string and return an array; based on this you can specify the element you want from 0-7 to get your eight columns.
In an expression you'd do something like this:
=Split(fields!readings.Value, ";")(0) (1st element) or
=Split(fields!readings.Value, ";")(7) (8th element)
The problem with this is when there are fewer than 8 elements in the readings field; you will get an error reported - wrapping the expression in an IIf is not enough, as IIf does not short-circuit in SSRS, so any problem string will error regardless.
To deal with these issues you can move the logic to custom code embedded in the report:
Function ElementByNumber(fieldValue As String, elementNumber As Integer) As String
    ' Returns the Nth (1-based) semicolon-delimited element, or Nothing if there are fewer elements
    If Split(fieldValue, ";").Length < elementNumber Then
        ElementByNumber = Nothing
    Else
        ElementByNumber = Split(fieldValue, ";")(elementNumber - 1)
    End If
End Function
You can then reference this in the report like:
=Code.ElementByNumber(fields!readings.Value, 8)
Repeat as required for each column you need.
Database level
The other non-SSRS specific workaround would be to handle this, if possible, at the database level, and just use the unpivoted data as a base for a Matrix in the report.
Erland Sommarskog has a series of articles under Arrays and Lists that present any number of methods to split strings in SQL Server; this SO question has a bunch of other alternatives.
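For completeness, here is a sketch of one classic pre-2016 method, a join against a Numbers helper table (it assumes a Numbers table holding integers 1..N and a source table with ID and readings columns; all names are illustrative):

SELECT r.ID,
       SUBSTRING(';' + r.readings + ';', n.Number + 1,
                 CHARINDEX(';', ';' + r.readings + ';', n.Number + 1) - n.Number - 1) AS reading
FROM Readings r
JOIN Numbers n
  ON n.Number < LEN(';' + r.readings + ';')
 AND SUBSTRING(';' + r.readings + ';', n.Number, 1) = ';';

Adding a ROW_NUMBER() per ID over that result would give you the 1-8 column index to pivot on in the Matrix.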
Obviously if you're dealing with a fixed Data Source/DataSet this might not be an option.
I've been working for a while now on a reporting application where I use Hibernate to define my queries. However, more and more I get the feeling that for reporting use cases this is not the best approach:
The queries only return partial columns, and thus not typed objects (unless you map all the fields in Java).
It is hard to express queries without going straight into SQL or HQL.
My current problem is that I want to get the top N per group, for example the last 5 days per element in a group, where on each day I display the amount of visitors.
The result should look like:
| RowName  | 1-1-2009 | 2-1-2009 | 3-1-2009 | 4-1-2009 | 5-1-2009 |
| SomeName |        1 |       42 |       34 |       32 |       35 |
What is the best approach to transform the data, which is stored per day per row, into an output like this? Is it time to fall back on regular SQL and work with untyped data?
I really want to use typed objects for my results, but Java makes my life pretty hard there. Any suggestions are welcome!
Using the Criteria API, you can do this:
Session session = ...;
Criteria criteria = session.createCriteria(MyClass.class);
criteria.setFirstResult(0); // the offset is zero-based, so 0 starts at the first row
criteria.setMaxResults(5);  // top five
... any other criteria ...
List topFive = criteria.list();
To do this in vanilla SQL (and to confirm that Hibernate is doing what you expect) check out this SO post:
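In the meantime, a rough sketch of the "last N days per group" part in plain SQL using standard window functions (the table and column names are only assumptions); pivoting those rows into one column per date would then happen at the reporting layer:

SELECT element, visit_date, visitors
FROM (
    SELECT element, visit_date, visitors,
           ROW_NUMBER() OVER (PARTITION BY element ORDER BY visit_date DESC) AS rn
    FROM daily_visits
) ranked
WHERE rn <= 5
ORDER BY element, visit_date;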