Postgres trigger at a specified time - sql

Hello experts, I work for a banking software vendor and we use a PostgreSQL database. Because of the heavy workload during office hours (peak hours), I want to execute some function at a specified time (off hours) via a trigger. If you have any idea, please help me.

A trigger should always be fast. You don't want to hold transactions open for a couple of hours; that would be a really bad idea.
The correct solution for a problem like this is a queue: the trigger only records the work to be done, and a scheduled job processes the queue during off hours. There are existing implementations such as PGQ, but I don't know whether they'll meet your requirements.
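A minimal sketch of that queue idea, assuming a hypothetical accounts table; the queue table, column names and the nightly step below are illustrative, not part of the original answer. The trigger stays cheap, and the expensive work runs later from cron (or the pg_cron extension):

    -- Queue table: the trigger only records work to be done.
    CREATE TABLE work_queue (
        id         bigserial PRIMARY KEY,
        account_id bigint      NOT NULL,
        queued_at  timestamptz NOT NULL DEFAULT now(),
        processed  boolean     NOT NULL DEFAULT false
    );

    -- Trigger function: a cheap insert, no heavy work inside the transaction.
    CREATE OR REPLACE FUNCTION enqueue_account_work() RETURNS trigger AS $$
    BEGIN
        INSERT INTO work_queue (account_id) VALUES (NEW.id);
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER accounts_enqueue
    AFTER INSERT OR UPDATE ON accounts
    FOR EACH ROW EXECUTE PROCEDURE enqueue_account_work();

    -- Scheduled off-hours step (OS cron + psql, or pg_cron): do the expensive
    -- processing here, then mark the queue entries as done.
    UPDATE work_queue
    SET    processed = true   -- replace with the real heavy processing
    WHERE  NOT processed;

This keeps every user transaction short while still guaranteeing that each change gets picked up in the next off-hours run.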

Related

SAP B1 DI-API replaces character on save

I wrote a small C# service importing tracking numbers into a single UDF, separated by a , (comma). The problem is that occasionally (maybe every 200th document) a comma is saved as a semicolon. A somewhat similar issue I have is with the Amazon importer where I add a comment: maybe with the same frequency, the comment ends up with whitespace between every single original character. What they all have in common is that the error cannot be within my code. There is no difference between the correct documents (ca. 95%) and the others.
Does anybody have an idea how I can work around these issues so they don't appear anymore?
Or why this can happen?
I know I have an outdated SAP B1 at version 9.2 PL 10 Hotfix3. DI-API is linked to the install folder. Is this issue fixed in any later version?
(The current workaround is a cron job that checks for wrong entries in the db and updates those documents. Very uncool.)
Definitely sounds like a DI-API bug. If you posted your code it would help confirm this.
Assuming it IS a DI-API bug, I would "dark side" it and just do a regular SQL update (bypassing the DI-API), since it's just a UDF and there's probably no business logic you need SAP to perform on these updates.
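A hedged sketch of such a direct update, assuming (purely as an illustration) that the tracking numbers live in a UDF named U_TrackingNumbers on the delivery header table ODLN; substitute your actual table and field names:

    -- Repair semicolons that should have been commas in the UDF.
    -- ODLN and U_TrackingNumbers are assumptions; use your real names.
    UPDATE ODLN
    SET    U_TrackingNumbers = REPLACE(U_TrackingNumbers, ';', ',')
    WHERE  U_TrackingNumbers LIKE '%;%';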
Alternatively, you could normalize your data and create a separate table, linked via FK to your current table, to house a single UDF value per row (thereby not having to deal with the weird comma character issue).
As a third alternative, you could make use of the SBO Post-Transaction Notification stored procedure to monitor for your error case and perform the "fix" there, instead of in your cron job.
Disclaimer: I have not worked with SAP in 4+ years.

Bug in Google Cloud BigQuery?

I observed something very strange today when trying to stream records into a BigQuery table. Sometimes after a successful stream it shows all the records that were streamed in, and sometimes it only shows part of them. What I did was delete the table and recreate it. Has anyone encountered a scenario like this? I am seriously concerned.
Many thanks.
Regards,
I've experienced a similar issue after deleting and recreating the table within a short time span, which is part of our e2e testing plan. As long as you do not delete/recreate your table, the streaming API works great. In our case the workaround was to customize the streaming table suffix for e2e execution only.
I am not sure if this has been addressed or not, but I would expect constant improvement.
I've also created a test project reproducing the issue and shared it with BigQuery team.

SQL Anywhere 12 Log Searching

We have an ERP Program used to create and manage stock / orders. Somehow an order has vanished - this should not be possible. It should be possible to cancel an unwanted order, but never delete it completely.
The order in question was created, printed and sent to a customer - and then disappeared. I know the Primary key and Table info, and want to search the log to see if this was somehow deleted, or perhaps there was a rollback.
How can I translate/search the log in this way?
Please note: I did not write this program, and it's not my job to fix it.
I just need to diagnose the issue and contact the SW Vendor, if required, and have them fix it. As such I cannot post any code.
With so little information it is hard to give a definitive answer.
I'd start by searching the regular logs. If you have some kind of audit trail mechanism that would be a great help!
If a search through the regular logs doesn't find you the answer then I would:
Get a copy of the database
Go through the REDO logs using the appropriate DBA tools. Since I'm not a SQL Anywhere DBA, I would get help from one.
When I found the point in time where the order was deleted, I would gather whatever other information I could get: the user that did the commit, or the users that were logged on at the time (I don't know exactly what kind of information you can get here). Also, go back to the other logs you may have and check around that timestamp.
To learn exactly how to go through the redo logs of a SQL Anywhere database, you should first try your Google luck and then ask on Database Administrators.
Solved!!!!
The Sybase Central tool has an option (which I couldn't find in the manual and missed the first time I looked) that can translate a log file into a series of statements and create a *.SQL file.
Tools -> SQL Anywhere -> Translate Log File -> follow the wizard (which hopefully for you is in a language that you speak; for me it was not).
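Once the log has been translated, the resulting *.SQL file is plain text, so you can search it for the order's primary key. A hedged illustration of the kind of statement to look for; the table, column and key value below are made up, so substitute your actual schema and order key:

    -- A hard delete of the missing order would appear in the translated
    -- file roughly like this (names and value are illustrative only):
    DELETE FROM dba.sales_orders WHERE order_id = 12345

If the key shows up only in INSERT/UPDATE statements and never in a DELETE, a rollback or an application-level issue becomes the more likely explanation.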

BigQuery table too fragmented - unable to rectify

I have got a Google BigQuery table that is too fragmented, to the point that it is unusable. Apparently there is supposed to be a job running that fixes this, but it doesn't seem to have stopped the issue for me.
I have attempted to fix this myself, with no success.
Steps tried:
Copying the table and deleting the original - this does not work, as the table is too fragmented to copy.
Exporting the file and re-importing it. I managed to export to Google Cloud Storage (the file was JSON, so I couldn't download it) - that part was fine. The problem was on re-import. I was trying to use the web interface and it asked for a schema. I only have the file to work with, so I tried to use the schema as identified by BigQuery, but couldn't get it accepted - I think the problem was with the nested (tree/leaf) format not translating properly.
To fix this, I know I either need to get the coalesce process to work (which is out of my hands - anyone from Google able to help? My project ID is 189325614134), or to get help formatting the import schema correctly.
This is currently causing a project to grind to a halt, as we can't query the data, so any help that can be given is greatly appreciated.
Andrew
I've run a manual coalesce on your table. It should be marginally better, but there seems to be a problem where we're not coalescing as thoroughly as we should. We're still investigating; we have an open bug on the issue.
Can you confirm this is the SocialAccounts table? You should not be seeing the fragmentation limit on this table when you try to copy it. Can you give the exact error you are seeing?

Cleaning up my SQL code

So I have a site that is now receiving 30k unique hits a day, and during peak hours I am getting a lot of 'error establishing a database connection' errors (it's a WP install, if you hadn't guessed).
Here is a quote from my host:
The error that you included in your support request seems to indicate that something is wrong with the code and the SQL query is not formatted properly. You would need to dig into that. We do not offer support for third-party applications or custom code.
Is there some "easy way" of pinpointing badly formatted/programmed code via Firebug or some Firefox extension? I am trying to find any possibility other than going through it line by line and turning this into a five-year plan.
Any help would be greatly appreciated.
If this only happens during peak hours, it probably has nothing to do with how the SQL queries are formatted; more likely the database server simply isn't responding fast enough under the load.
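A quick, hedged way to check that theory, assuming the standard WordPress MySQL backend (not stated in the question), is to look at connection usage and running queries while the site is busy:

    -- Compare current connections against the server's limit during peak hours.
    SHOW GLOBAL VARIABLES LIKE 'max_connections';
    SHOW GLOBAL STATUS LIKE 'Threads_connected';

    -- See what is actually running (and possibly piling up) right now.
    SHOW FULL PROCESSLIST;

If Threads_connected keeps hitting max_connections at peak times, the problem is capacity or caching rather than query formatting.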