I run a query through PHQL to delete records. How can I get the number of deleted records?
In older versions, I used this construction:
$this->modelsManager->getReadConnection('ModelName')->affectedRows();
But in the new version this no longer works.
According to the main API docs, you can get the affected rows this way:
$connection->execute("DELETE FROM robots");
echo $connection->affectedRows(), ' were deleted';
Try it and you should get your affected rows.
Good luck!
I am coding an application that deals with files, so I have a table that contains information about all the files registered in the application.
My "files" table looks like this: ID, Path and LastScanTime.
The algorithm that I use in my application is simple:
Take the oldest row (LastScanTime is the oldest)
Extract the file path
Do some magic on this file (takes exactly 5 minutes)
Update the LastScanTime to the current time (now)
Go to step "1"
So far, the task is pretty simple. For this, I am going to use this SQL statement to get the oldest item:
SELECT TOP 1 * FROM files ORDER BY [LastScanTime] ASC
and at the end of the item's processing (preventing the item from being selected again immediately):
UPDATE Files SET [LastScanTime]=GETDATE() WHERE Id=#ItemID
Now I am going to add some complexity to the algorithm:
Take the 3 oldest rows (LastScanTime is the oldest)
For each row, do:
A. Extract the file path
B. Do some magic on this file (takes exactly 5 minutes)
C. Update the LastScanTime to the current time (now)
D. Go to step "1"
The problem I am now facing is that the whole process runs in parallel (no more serial processing), so changing my SQL statement to the following is not enough:
SELECT TOP 3 * FROM files ORDER BY [LastScanTime] ASC
Why isn't this SQL statement enough?
Let's say I run my code and start processing the first 3 items. A minute later, I want to start processing another 3 items. This SQL statement will retrieve exactly the same "oldest" items that are already being processed.
Possible solution
Implement a combined SELECT & UPDATE that gets the 3 oldest items and immediately updates their last scan time. Since there is no SELECT & UPDATE in the same statement, what happens if another SELECT comes in while the first one is executing? Both statements will get the same results. This is a problem... Another problem is that we mark an item as "scanned recently" before the scan has really finished. What happens if the scan is terminated by an error?
I'm looking for tips and tricks to solve this problem. Solutions may add columns as needed.
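For example, the combined claim I have in mind might look something like this (just a sketch, assuming SQL Server and a new nullable ScanStartTime column; items claimed within the last 10 minutes are skipped, so a scan that dies becomes visible again):
-- Atomically claim the 3 oldest unclaimed files
WITH oldest AS (
    SELECT TOP (3) *
    FROM Files WITH (UPDLOCK, READPAST)  -- skip rows already locked by another worker
    WHERE ScanStartTime IS NULL
       OR ScanStartTime < DATEADD(MINUTE, -10, GETDATE())  -- reclaim crashed scans
    ORDER BY LastScanTime ASC
)
UPDATE oldest
SET ScanStartTime = GETDATE()
OUTPUT inserted.Id, inserted.Path;  -- returns the claimed rows to the caller

-- And when a scan finishes successfully:
UPDATE Files SET [LastScanTime] = GETDATE(), ScanStartTime = NULL WHERE Id = #ItemID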
I'll appreciate your help.
Well, I usually have the habit of keeping two different date fields in the database: one is AddedDate and the other is ModifiedDate.
So the algorithm in your terms would be:
Take the oldest row (AddedDate is the oldest)
Extract the file path
Do some processing on this file
Update the ModifiedDate to the current time (now)
It seems that you are about to reinvent an event queue in SQL. Standard message brokers like RabbitMQ or ActiveMQ may solve your problem.
Due to recent updates to the database, I have run into a weird problem. I have two tables, a tVehicleDeal table and a tVehicleLog table. We did a 'migration', meaning we created an app that transfers the data from an old database to a more relational database. This process took a while, but it finished and everything seemed good to go. What happens now is that anytime tVehicleDeal is updated, the corresponding information is inserted into tVehicleLog.
The problem that has occurred is: I ran a script that would update the current deal in tVehicleDeal to the most recent log in tVehicleLog. I made an error in my script, and not all the current deals in tVehicleDeal were updated properly. As a result, when the users updated the active deal in tVehicleDeal, not all the information was inserted into tVehicleLog. I need to find a way to update the newest entry with some fields from the past entries, such as the date it was titled. Some deals have as many as 20 different logs, whereas some may have only 2 or 3. I have found this link here, but I'm not 100 percent positive it is what I'm looking for. I have tried something similar to this, but I am unable to get anything to work using the examples found on that page. Any other ideas will help greatly!
EDIT:
What I am unable to figure out is how to update a column in tVehicleLog. For example:
In the tVehicleLog table there are 6 results for a particular DealID.
Rows 1 through 4 do not have a titled date, but the 5th row does have a titled date.
I can't figure out how to update the titled column of the 6th row for that DealID based on the 5th row that does have the titled date.
The link provided above looked like it was what I was looking for, but I was unable to get that solution to work.
Based on this line from your question,
I can't figure out how to update the titled column of the 6th row for
that DealID based on the 5th row that does have the titled date.
It seems like this should fix your problem. It is written only to solve this specific scenario. If other scenarios exist that are not exactly like this one, adjustments may have to be made. If I didn't understand your problem, please post further clarification.
UPDATE L1
SET TitleDate=L2.TitleDate
FROM tVehicleLog L1
INNER JOIN tVehicleLog L2
ON L1.DealID=L2.DealID
AND L2.TitleDate IS NOT NULL
WHERE L1.<PrimaryKeyColumn>=#ThePrimaryKeyColumnOfTheRowYouWantToUpdate
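As a sanity check, you can preview what the UPDATE would do by turning the same join into a SELECT first (same placeholder names as above):
SELECT L1.<PrimaryKeyColumn>, L1.DealID,
       L1.TitleDate AS CurrentTitleDate, L2.TitleDate AS NewTitleDate
FROM tVehicleLog L1
INNER JOIN tVehicleLog L2
    ON L1.DealID=L2.DealID
    AND L2.TitleDate IS NOT NULL
WHERE L1.<PrimaryKeyColumn>=#ThePrimaryKeyColumnOfTheRowYouWantToUpdate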
I am writing a query that basically updates the EstimatedDepartureDate field to a specified date. I am using a view I created to pull the list of accounts to update, and I run this against my tnpEmployee table, which has all the employee records. When running this query, however, it always produces output similar to:
(1 row(s) affected)
(55 row(s) affected)
As in, it affects one row first and then the rest. And when I run the SQL in the view I created to see the list of persons of interest, it shows one record fewer with every run of the query, and the count at the bottom drops by one as well, so the next run would show (54 row(s) affected).
Here is the query I'm running:
UPDATE tnpEmployee
SET tnpEmployee.EstimatedDepartureDate='2014-04-01 13:37:43.000'
FROM tnpEmployee
INNER JOIN vnpGetActiveAccountsAgainstToBeDisabled
ON tnpEmployee.EmployeeID=vnpGetActiveAccountsAgainstToBeDisabled.EmployeeID
WHERE tnpEmployee.Email=vnpGetActiveAccountsAgainstToBeDisabled.Email
Any help would be greatly appreciated; it's killing me! All that needs to happen is that the rows get that date field set, and that's it; they should all keep showing up in the very basic view I created, not drop by one on every run. Also, when I run the view and get one record fewer, ALL the records have been updated with the new date.
You could try to rewrite your SQL statement:
UPDATE tnpEmployee
SET tnpEmployee.EstimatedDepartureDate='2014-04-01 13:37:43.000'
WHERE tnpEmployee.EmployeeID IN (
SELECT vnpGetActiveAccountsAgainstToBeDisabled.EmployeeID
FROM vnpGetActiveAccountsAgainstToBeDisabled
WHERE tnpEmployee.Email=vnpGetActiveAccountsAgainstToBeDisabled.Email
);
As others have already suggested, your output looks like your UPDATE invokes a trigger.
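If this is SQL Server, you can list the triggers defined on the table with the sys.triggers catalog view (the dbo schema is an assumption):
-- Shows any triggers that fire on tnpEmployee
SELECT t.name, t.is_disabled
FROM sys.triggers AS t
WHERE t.parent_id = OBJECT_ID('dbo.tnpEmployee');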
I need help with a query that does the following:
Start from the newest record and go downwards to the older records.
It needs to be ordered by created_at time.
If there are new records in the database by created_at time, retrieve them, but do not get records I already got from step 1.
I want to get only 16 records at a time. That number can change later.
Do not retrieve records I already sent at a previous time.
Also just to let you know, this is started via $.ajax.
The reason for this is that I am retrieving new + old records in real time to be sent to the client. Think of something like: a user starts off visiting the website and gets the current records, starting with the new ones. Then the user can fetch older records, but the same request also retrieves the brand new records. With a twist of only 16 records at a time.
Do I make sense?
This is what I currently have for code:
RssEntry.includes(:RssItem).where("rss_items.rss_id in (?) AND (rss_entries.id < ? OR rss_entries.created_at > ?)", rssids, lid, time).order("rss_entries.id DESC").limit(16)
lid = the lowest id from the previously returned records
rssids = the ids from which to get the records
time = the last time the records call was made
That code above is only the beginning. I now need help to make sure it fits my requirements above.
UPDATE 1
OK, so I managed to do what I wanted, but in 2 SQL queries. I really don't think it is possible to do what I want in one SQL query.
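Roughly, the two queries are equivalent to SQL like this (simplified; the rss_item_id join column and the literal values are just illustrations):
-- Query 1: the next page of older entries (keyset on id)
SELECT rss_entries.*
FROM rss_entries
INNER JOIN rss_items ON rss_items.id = rss_entries.rss_item_id
WHERE rss_items.rss_id IN (1, 2, 3)  -- rssids
  AND rss_entries.id < 500           -- lid
ORDER BY rss_entries.id DESC
LIMIT 16;

-- Query 2: anything created since the last call
SELECT rss_entries.*
FROM rss_entries
INNER JOIN rss_items ON rss_items.id = rss_entries.rss_item_id
WHERE rss_items.rss_id IN (1, 2, 3)                  -- rssids
  AND rss_entries.created_at > '2014-04-01 13:37:00' -- time
ORDER BY rss_entries.id DESC;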
Any help is greatly appreciated.
Thanks.
Firstly, use scopes to get what you want:
class RssEntry < ActiveRecord::Base
  scope :with_items, includes(:RssItem)
  scope :newer_first, order("rss_entries.created_at DESC")

  def self.in(args)
    where(id: args)
  end

  def self.since(time)
    where('rss_entries.created_at > ?', time)
  end
end
then
RssEntry.with_items.in(rssids).since(time).offset(lid).limit(16).newer_first
It should work as expected.
Hello, I need a SQL query that gets me rows 'start' to 'finish'.
For example:
A website with many items, where page 1 selects only items 1-10, page 2 has items 11-20, and so on.
I know how to do this with Microsoft SQL Server and MySQL, but I need an implementation that is platform independent. :/
I have an auto-increment column for IDs, but deleting rows in between will of course mess up the result when I select via
WHERE ID > number AND ID < othernumber
Is this possible without fetching the whole database to a ResultSet?
I think your safest bet would be to use the BETWEEN operator. I believe it works across Oracle/MySQL/MSSQL.
WHERE ID BETWEEN number AND othernumber
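If the gaps left by deleted IDs are the real problem, ROW_NUMBER() is another widely supported option (a sketch against a hypothetical items table; it works on Oracle, SQL Server, PostgreSQL, and MySQL 8+):
-- Fetch rows 11-20 regardless of holes in the ID sequence
SELECT *
FROM (
    SELECT i.*, ROW_NUMBER() OVER (ORDER BY i.ID) AS rn
    FROM items i
) numbered
WHERE rn BETWEEN 11 AND 20;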
Concerning your comment "I was just thinking of the case when the first 100 IDs are gone and I'll have to check further until there is something to fetch": you might want to consider NOT actually deleting rows from your database, but instead adding a flag like "active" to your tables, so you can avoid the situation you're now trying to avoid. The alternative is where you are now: having to find the max and min rows in a filter.
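A sketch of that flag idea (the items table is hypothetical, and the exact ALTER TABLE syntax varies a bit per platform):
-- Add the flag once; existing rows count as active
ALTER TABLE items ADD active INT NOT NULL DEFAULT 1;

-- Instead of DELETE FROM items WHERE ID = 42:
UPDATE items SET active = 0 WHERE ID = 42;

-- Reads filter on the flag; rows are never physically removed
SELECT * FROM items WHERE active = 1;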