I ran a SQL query using the dataclips feature on Heroku. It works great - the results I get are in the correct order, because part of my clause specifies "ORDER BY ...".
I just pulled a copy of my database from Heroku, then ran ActiveRecord::Base.connection.execute "" with the same query.
The data I get is correct, but the PGResult object returns the rows in an apparently random order; i.e. my ORDER BY clause is totally ignored.
I am wondering why that is happening and if there is any way to prevent it.
We are now using the same version of Postgres as Heroku and we get the same dataset. It is, however, still not correctly ordered. The ORDER BY clause is still not being honored, so we are simply forcing a sort in Rails (since we limit the results to 10, it's not much work for Rails).
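The workaround above can be sketched in plain Ruby. This is only an illustrative sketch: the column name `score` and the row data are invented, standing in for the hashes that a PGResult yields via `to_a`.

```ruby
# Sketch: re-sort a raw result set in Ruby when the driver-returned
# order can't be trusted. The "score" column and rows are illustrative;
# in the real app they'd come from
# ActiveRecord::Base.connection.execute(sql).to_a
rows = [
  { "id" => 2, "score" => 40 },
  { "id" => 1, "score" => 90 },
  { "id" => 3, "score" => 70 },
]

# Equivalent of ORDER BY score DESC LIMIT 10, done in Ruby:
top = rows.sort_by { |r| -r["score"] }.first(10)

top.each { |r| puts "#{r['id']}: #{r['score']}" }
```

Since the result is capped at 10 rows, the extra sort is negligible compared to the query itself.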
I'm posting this because no one seems to have a good answer and we haven't found anything that works either. Might as well let other folks know that there seems to be an issue. I'm going to dig around a bit and potentially report a bug to the PostgreSQL folks.
Apologies first of all if there is an answer to this elsewhere on the site. I've checked some of the proposed solutions and can't find anything appropriate.
So I've got this SSRS report that works fine when deployed but won't run locally during testing. The main query itself works when run in the query editor, as do all the subqueries that provide data for the parameter drop-down lists, but when I try to preview the report, I get the error.
Bear in mind it used to work, up until the end of last year, which was when it was last updated.
I've tried removing all the tables and matrices on a copy (replacing them with one very simple table); the parameters went too, and I still get the error. I've also downloaded the server version, renamed it and redeployed it: it works online, but not locally. As the error message is brutally vague, I've run out of ideas of things to try. Apart from switching over to Power BI, can anyone think of anything else I could do to work out where the error is coming from?
Possibly relevant - the main query has some recursion in a subquery, but only a couple of levels. Could this be related? As I've said before, it used to work...
PS I'm using VS 16.7.2 from server V13.0.4466.4
PPS I also added the query to a brand-new report and it errored, so I think it must be something related to the SQL itself?
Is there a way to update the Liferay's site page's friendly name through a SQL script?
We generally do this in the control panel through admin user.
While #steven35's answer might do the job, you're hitting a pet peeve of mine. You're doing it right if you make the change in the Control Panel or through the API, and you should never look for a way to write to Liferay's database directly. It might work for the moment, but it might also fail in unforeseen ways, sometimes long after your update.
There have been enough examples of this happening. If you change data while Liferay is running, the cache will not be updated. If these values are also indexed in the search index, they won't be updated there either, and later lookups might not find the correct page until you reindex everything. The same value might be stored somewhere else, or in a translated form. Numerous conditions can fail, and there is always one condition more than you expect and cater for. That one condition might break your neck.
Granted, the friendly name of a page might not fall into the most complex of these cases, but just don't get into the habit of writing to Liferay's database. Or, if you do, don't complain when future upgrades fail or require extra work because the database contains values that the API didn't expect. The problem is that during the next upgrade (if you do it in, say, one year) you'll long have forgotten that you manually changed data in the database, and you'll blame Liferay for the problems during your upgrade.
Changing data is exactly what the UI and the API are for.
Friendly URLs are stored in LayoutFriendlyURL.friendlyURL in your Liferay database, so the following query should work:
UPDATE "yourdatabase"."LayoutFriendlyURL" SET "friendlyURL"='/newurl' WHERE "layoutFriendlyURLId"=12345;
You will also need to update the Layout table accordingly to match the new friendly URL.
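A hedged sketch of that companion update (Liferay's Layout table also carries a friendlyURL column; the database name, the plid value and the URL below are placeholders mirroring the example above, so verify the right row against your own data first):

```sql
-- Sketch only: keep Layout in step with LayoutFriendlyURL.
-- "yourdatabase", the plid and the URL are placeholder values.
UPDATE "yourdatabase"."Layout"
SET "friendlyURL"='/newurl'
WHERE "plid"=12345;
```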
A few customers reported that after the Core Data migration, their database entries end up duplicated.
We opened the databases they sent us, and indeed the entries are duplicated. We restored the backup and converted the database again, but we can't reproduce the issue in the office; the migration just works.
What could be the reason for this duplication? Is it related to the structure of the model, or something else?
It's a lightweight migration using mapping models. The Core Data stores are SQLite-backed.
thanks
After battling this for a while, the solution turned out to be pretty obvious for us. The duplication would only happen very occasionally, so it was hard to find a repro (and even harder to find the reason!).
It seemed the app would sometimes crash mid-migration (for unknown reasons).
We are using deterministic file names for the destinationURL in -[NSMigrationManager migrateStoreFromURL:...], like appdata.sqlite-model_version_2.3. We weren't checking for the existence of the destination before migrating, and NSMigrationManager would write directly into it regardless, so we'd get duplicates of every entity from the first (crashed) attempt and single copies of everything after that.
A few -[NSFileManager removeItemAtPath:error:] calls for the .sqlite, .sqlite-shm and .sqlite-wal files before attempting migration, to clean up any previously failed attempt, have solved the problem for us.
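The cleanup step is language-agnostic, so here is a minimal sketch of the idea in Ruby (in the app itself it's the few -[NSFileManager removeItemAtPath:error:] calls described above). The store file name is illustrative:

```ruby
require "fileutils"
require "tmpdir"

# Sketch: before re-running a migration, delete any leftover destination
# store from a previously crashed attempt, including SQLite's -shm/-wal
# sidecar files. The file name below is illustrative.
def clean_stale_destination(dest)
  ["", "-shm", "-wal"].each do |suffix|
    FileUtils.rm_f("#{dest}#{suffix}") # no-op if the file doesn't exist
  end
end

# Demonstration with a throwaway directory standing in for the app's
# documents directory:
dir = Dir.mktmpdir
dest = File.join(dir, "appdata.sqlite-model_version_2.3")
[dest, "#{dest}-shm", "#{dest}-wal"].each { |p| File.write(p, "stale") }
clean_stale_destination(dest)
puts Dir.children(dir).empty? # prints true: all stale files are gone
```

Because the removal is idempotent, it is safe to run unconditionally before every migration attempt.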
I have some code deployed on 1 of my 6 servers. I need a Splunk query that pulls data from the other 5 hosts, i.e. "all except this one host". I know the host option in Splunk for filtering to a host's logs, but I have no idea how to do all except one. Can someone please assist me?
The one box I am talking about has my latest code changes, and the other 5 have my old code, so I want to write a query to do a before-vs-after analysis.
Looks like you have your answer, but I use an alternative method that speeds things up for me.
Within your search results, you can quickly eliminate what you want to filter out by ALT-clicking on a value in a selected field. In your case, it would add NOT host="1" to your query and immediately update your results.
I find this particularly helpful when I'm in the preliminary stage of investigating an issue, and don't have enough information to know exactly where to look first. It makes it easy to rapidly eliminate what you don't need.
*Note: This may still be broken in Splunk 6, not sure if the bug has been fixed yet: http://answers.splunk.com/answers/109473/alt-click-not-working-selected-fields
Okay, I got the answer to my question: just use !=. So if I want the results for all my hosts except host 1, all I do is index=blah host!="1"
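One nuance worth knowing when choosing between the two forms mentioned in this thread: host!="1" matches only events that have a host field with some other value, while NOT host="1" also matches events that have no host field at all. Either will exclude host 1:

```
index=blah host!="1"
index=blah NOT host="1"
```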
I have got a Google BigQuery table that is too fragmented, to the point of being unusable. Apparently there is supposed to be a job running to fix this, but it doesn't seem to have resolved the issue for me.
I have attempted to fix this myself, with no success.
Steps tried:
Copying the table and deleting the original: this does not work, as the table is too fragmented for the copy.
Exporting the file and re-importing it. I managed to export to Google Cloud Storage; as the file was JSON I couldn't download it, but that was fine. The problem was on re-import. I was trying to use the web interface and it asked for a schema. I only have the file to work with, so I tried to use the schema as identified by BigQuery, but couldn't get it to be accepted; I think the problem was that the tree/leaf (nested) format wasn't translating properly.
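For nested data, a schema can be supplied to BigQuery as a JSON file rather than typed into the web form, which is usually the easier way to express the tree/leaf structure. This is only an illustrative sketch; the field names below are invented, not taken from the actual table:

```json
[
  {"name": "id",      "type": "STRING", "mode": "REQUIRED"},
  {"name": "tags",    "type": "STRING", "mode": "REPEATED"},
  {"name": "account", "type": "RECORD", "mode": "NULLABLE",
   "fields": [
     {"name": "network", "type": "STRING", "mode": "NULLABLE"},
     {"name": "handle",  "type": "STRING", "mode": "NULLABLE"}
   ]}
]
```

Nested RECORD fields with their own "fields" arrays mirror the tree/leaf layout that is hard to recreate in the web interface.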
To fix this, I know I either need to get the coalesce process to work (out of my hands; is anyone from Google able to help? My project ID is 189325614134), or to get help formatting the import schema correctly.
This is currently causing a project to grind to a halt, as we can't query the data, so any help that can be given is greatly appreciated.
Andrew
I've run a manual coalesce on your table. It should be marginally better, but there seems to be a problem where we're not coalescing as thoroughly as we should. We're still investigating; we have an open bug on the issue.
Can you confirm this is the SocialAccounts table? You should not be seeing the fragmentation limit on this table when you try to copy it. Can you give the exact error you are seeing?