I'm facing an issue regarding Notifications on my portal (Liferay 6.2).
When I had the idea to clean old (and useless) notifications out of the DB table USERNOTIFICATIONEVENT, my notification portlet started crashing.
Every time I open the notifications I get the following error:
Caused by: com.liferay.portal.NoSuchUserNotificationEventException: No UserNotificationEvent exists with the primary key 115765
Even though the table is empty, when I log in with a user the notification count shows 20 (for example), and when I click on the notifications I get the error. If I create a new notification from Java code, the table gets the new row inserted, and after that the count shows 21.
How is it possible to see 21 notifications when USERNOTIFICATIONEVENT contains only 1 record?
How is it possible? Because you manipulated the database without fully understanding it, a common recipe for disaster. See the question "where liferay site will store in which table details will fetch?" for an argument not to bother with the tables directly. If you do anything to the data, do it through the API, never through database manipulation. Also check the link contained in that answer.
There are typically additional data structures to keep in sync, for example metadata for permission checks or the full-text index, that you'd need to update as well. And that's not a complete list.
Restoring your backup is the safest way to recover, because even if you get it to work now by other means, the upgrade routines for the next version might find unexpected data. And then it's too late.
Is there a way in native SQL, in a specific SQL database (e.g. PostgreSQL), or in a NoSQL database, to subscribe to a query and receive updates when an entry matches its criteria? For example, with the query SELECT * FROM users WHERE birthday = today(), is it possible to receive an update when an entry matches the criteria, instead of using a so-called 'polling' mechanism? The query can be slightly more complex, because this idea is needed for a solution that sends recurring messages based on user preferences.
The only database I know that has built-in notifications like this is RebirthDB with a feature called "changefeeds":
They allow clients to receive changes on a table, a single document, or even the results from a specific query as they happen. Nearly any ReQL query can be turned into a changefeed.
The only problem is that the database began life as RethinkDB, but the company making it folded in 2016, leaving it to the open-source community. It's still alive as "RebirthDB" on GitHub with active development, but the documentation is just a copy of the old RethinkDB docs with GitHub notices. They have a website URL, but no website. I hope they can keep it alive: it's a great idea.
https://github.com/RebirthDB/docs
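For the example in the question, a changefeed with the Python driver would look roughly like this (a sketch only; the connection details and the "users" table are assumptions, and newer driver releases use a slightly different import style):

    import rethinkdb as r

    # Connect to the server (default host/port assumed).
    conn = r.connect(host="localhost", port=28015)

    # Turn the query itself into a changefeed. Iterating it blocks and
    # yields a {"old_val": ..., "new_val": ...} document each time a row
    # starts matching the filter (new_val is None when one stops matching).
    feed = r.table("users").filter(
        r.row["birthday"] == r.now().date()
    ).changes().run(conn)

    for change in feed:
        print(change)  # push a message here instead of polling

The point is that the filter condition lives on the server, so the client only wakes up when a matching row actually appears, rather than re-running the query on a timer.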
I am testing a module which has add-user and delete-user functionality. I recorded a script to delete data, but how do I cross-verify that the deletion has taken place, i.e. that the particular user actually got deleted? One way is to verify the success message, but can we also verify it another way, e.g. check that the user is no longer present, which would mean the deletion succeeded?
It depends on what object it is you are deleting.
In your example you said that you delete a user. You could try to log in with the deleted user, and expect (verify) that the application responds with some error message.
Can you please add more information on what exactly you are trying to achieve?
If you want to test the non-existence of a DOM object after deletion, there is assertElementNotPresent (or the equivalent WebDriver check sketched below).
If you want to test the non-existence of the data itself then, as said above, after deleting, query the data again to ensure that it no longer exists.
If this doesn't answer your question, please provide more info on your needs.
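A rough version of that DOM check with Selenium WebDriver in Python might look like this (a sketch only; the page URL and the locator for the user's row are hypothetical):

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://example.com/admin/users")  # hypothetical user list page

    # After the delete script has run, no row for that user
    # should remain in the DOM.
    rows = driver.find_elements(By.XPATH, "//td[text()='deleted_user']")
    assert len(rows) == 0, "user is still listed after deletion"

    driver.quit()

find_elements returns an empty list rather than raising when nothing matches, which is what makes it suitable for a "not present" assertion.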
So, we have this one project which uses Cloud Storage and BigQuery as services. All has been well.
Then, I wanted to add Cloud SQL to this project to try it out. It asked for a unique Project ID so I gave it one. (The Project ID is different than the Project Number.)
Ever since then, I've been having a difficult time accessing my BigQuery tables. When I go to the BigQuery web interface, the URL contains the Project ID instead of the original Project Number. It shows the list of datasets, but now shows the Project Number before each dataset name and the datasets are greyed out and inaccessible. If I manually change the URL to contain the Project Number instead of the Project ID, it appears to work although it shows the list of datasets in the left nav twice, one set greyed out and inaccessible and the other set seemingly accessible.
At the same time, some code that I've been successfully using in Apps Script to access BigQuery is now regularly failing with a generic "We're sorry, a server error occurred. Please wait a bit and try again." I'm not sure if this is related to the Project ID/Project Number confusion, or if it's just a red herring.
Since we actively use the Cloud Storage service of this project, I am trying to be cautious with further experimentation with this project. I'm not sure if I should delete the Cloud SQL service in this project to get it back to the way it was, or if this is a known issue with some back-end solution. Please advise.
After setting the project ID, there can be a delay before BigQuery picks up the change. It should happen within 15 minutes or so, but sometimes it takes longer.
If you send the project ID I can make sure it has been updated.
I have just done a quick search and nothing too relevant came up so here goes.
I have released the first version of an app. I have made a few changes to the SQLite DB since then; in the next release I will need to update the DB structure but retain the user's data.
What's the best approach for this? I'm currently thinking that on app update I will never replace the user's (documents folder, not in bundle) database file but rather alter its structure using SQL queries.
This would involve tracking the changes made to the database since the previous release, scripting all these changes into SQL queries, and running these to bring the DB to the latest revision. I will also need to keep a field in the database to track the version number (kept in line with the app version for simplicity).
Unless there are specific hooks or delegate methods that are fired at first run after an update, I will put the calls for this logic at the very beginning of the app delegate, before anything else is run.
While doing this I will display "Updating app" or something to the user.
Next thing: what happens if there is an error somewhere along the line and the update fails? The DB will be out of date and the app won't function properly, as it expects a newer version.
Should I take it upon myself to just delete the user's DB file and replace it with the new version from the app bundle? Or should I just test, test, test until everything is solid on my side, so that if an error occurs on the user's side it's something else, in which case I can't do anything about it except discard the data.
Any ideas on this would be greatly appreciated. :)
Thanks!
First of all, the approach you are considering is the correct one. This is known as database migration. Whenever you modify the database on your end, you should collect the appropriate ALTER TABLE etc. statements into a migration script.
Then the next release of your app should run this code once (as you described) to migrate all the user's data.
As for handling errors, that's a tough one. I would be very wary of discarding the user's data. Better would be to display an error message and perhaps let the user contact you with a bug report. Then you can release an update to your app which hopefully can do the migration with no problems. But ideally you test the process well enough that there shouldn't be any problems like this. Of course, it all depends on the complexity of the migration process.
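A minimal sketch of such a migration runner, shown here with Python's sqlite3 and SQLite's built-in user_version pragma as the version field (the tables and statements are invented examples):

    import sqlite3

    # Ordered migrations; position N brings the schema up to version N.
    MIGRATIONS = [
        ["ALTER TABLE notes ADD COLUMN created_at TEXT"],           # -> version 1
        ["CREATE TABLE tags (id INTEGER PRIMARY KEY, name TEXT)"],  # -> version 2
    ]

    def migrate(db_path):
        # isolation_level=None: we manage transactions explicitly below.
        conn = sqlite3.connect(db_path, isolation_level=None)
        current = conn.execute("PRAGMA user_version").fetchone()[0]
        for version, statements in enumerate(MIGRATIONS, start=1):
            if version <= current:
                continue  # already applied by an earlier release
            try:
                conn.execute("BEGIN")
                for sql in statements:
                    conn.execute(sql)
                conn.execute("PRAGMA user_version = %d" % version)
                conn.execute("COMMIT")
            except Exception:
                conn.execute("ROLLBACK")
                raise
        conn.close()

Because each migration runs in an explicit transaction, a failed statement rolls the file back to the last good version, so the app can retry the migration on the next launch rather than deleting the user's data.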
Part of the setup routine for the product I'm working on installs a database update utility. The utility checks the current version of the user's database and (if necessary) executes a series of SQL statements that upgrade the database to the current version.
Two key features of this routine:
Once initiated, it runs without user interaction
SQL operations preserve the integrity of the user's data
The goal is to keep the setup/database routine as simple as possible for the end user (the target audience is non-technical). However, I find that in some cases, these two features are at odds. For example, I want to add a unique index to one of my tables - yet it's possible that existing data already breaks this rule. I could:
Silently choose what's "right" for the user and discard (or archive) data; or
Ask the user to understand what a unique index is and get them to choose what data goes where
Neither option sounds appealing to me. I could compromise and not create a unique index at all, but that would suck. I wonder what others do in this situation?
Check out SQL Packager from Red-Gate. I have not personally used it, but these guys make good tools overall and this seems to do what you're looking for. It lets you modify the script to customize the install:
http://www.red-gate.com/products/SQL_Packager/index.htm
You never throw a user's data out. One possible option is to try to create the unique index. If the index creation fails, let them know it failed, tell them what they need to research, and provide them a script they can run if they find they have a data error that they choose to fix up.
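Sketched with Python's sqlite3 as a stand-in for whatever engine the product actually uses (the users table, email column, and index name are invented for illustration):

    import sqlite3

    def try_create_unique_index(conn):
        try:
            conn.execute("CREATE UNIQUE INDEX idx_users_email ON users(email)")
            return True
        except sqlite3.IntegrityError:
            # Existing duplicates block the index. Don't discard anything;
            # report the offending values so the user can decide what to do.
            dupes = conn.execute(
                "SELECT email, COUNT(*) FROM users "
                "GROUP BY email HAVING COUNT(*) > 1"
            ).fetchall()
            for value, count in dupes:
                print("duplicate value %r appears in %d rows" % (value, count))
            return False

Since the routine runs without user interaction, the failure path would log the duplicates (or write them to a report file) rather than prompt, and the rest of the upgrade can still proceed without that index.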