I use pg_dump to make a copy of the database, copy1.sql.
I run an up migration to create a new instance:

up: async (queryInterface) => {
  return await queryInterface.bulkInsert('keys', [{ clientKey: 'key123' }]);
}
I run a down migration to delete the instance:

down: async (queryInterface) => {
  return await queryInterface.bulkDelete('keys', { clientKey: ['key123'] });
}
I do another pg_dump of the database, copy2.sql, and compare the two dumps to verify that the down migration worked, using a bash script:

diff "copy1.sql" "copy2.sql"
The difference is:
-SELECT pg_catalog.setval('public.keys_id_seq', 6, true);
+SELECT pg_catalog.setval('public.keys_id_seq', 7, true);
This makes my test fail because the two copies of the database are not identical due to this difference. Even though I deleted that key, the dump says the next id in the sequence is going to be 8 instead of 7. The table rows that currently exist are 1 through 6. Is there a way to delete the instance so that the sequence will start at 7 instead of 8? That is, both copies of the database should have
SELECT pg_catalog.setval('public.keys_id_seq', 6, true);
Are there options I can include? Maybe something like:
down: async (queryInterface) => {
  return await queryInterface.bulkDelete('keys', { clientKey: ['key123'] }, { resetIdSequence: true });
}
You can reset the sequence using the TRUNCATE TABLE command, which erases all table data. For example:
truncate table table_name restart identity;
A second way is manual resetting using setval. Example:
select setval('your_table_id_seq', 1, false);
If you don't delete all table data, it is recommended to set the sequence value from the maximum id of the records. Example:
select setval('your_table_id_seq', COALESCE((select max(id)+1 from your_table), 1), false);
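To apply this from the Sequelize migration in the question, you can issue the setval through queryInterface.sequelize.query in the down step. A minimal sketch, assuming Postgres's default sequence name keys_id_seq for keys.id:

down: async (queryInterface) => {
  await queryInterface.bulkDelete('keys', { clientKey: ['key123'] });
  // Rewind the sequence to max(id)+1 (or 1 on an empty table), matching the
  // setval pattern above. Adjust 'keys_id_seq' if your sequence is named differently.
  return queryInterface.sequelize.query(
    "select setval('keys_id_seq', COALESCE((select max(id)+1 from keys), 1), false);"
  );
}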
I know it might be too late for you, but I had the same problem and resolved it by adding {restartIdentity: true} in my migration file, like this (an example from one of my tables):
async down(queryInterface, Sequelize) {
  await queryInterface.dropTable('card', { restartIdentity: true });
}
To be sure it works, I tried several "rounds" with these commands in the terminal:

npx sequelize db:migrate, npx sequelize db:seed:all : everything is in place :)
npx sequelize db:migrate:undo:all : no tables, good!
npx sequelize db:migrate, npx sequelize db:seed:all : everything is good, the foreign keys are still the right ones, great!

So for your code you could try this:
down: async (queryInterface) => {
  return await queryInterface.bulkDelete('keys', { clientKey: ['key123'] }, { restartIdentity: true });
}
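One caveat: in recent Sequelize versions, restartIdentity is documented to take effect only together with truncate, and truncating removes every row rather than just the matched ones. If wiping the table is acceptable in your down migration, a sketch of that variant would be:

down: async (queryInterface) => {
  // truncate ignores the where argument and deletes all rows;
  // restartIdentity then resets the sequences owned by the table's columns.
  return queryInterface.bulkDelete('keys', null, { truncate: true, restartIdentity: true });
}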
Hope this helps ;)
Related
I am trying to update hundreds of database records using the TypeORM library. The problem is that sometimes a DUPLICATE ERR is returned from SQL when the bulk upload is performed, which stops the whole operation. Is it possible to set up TypeORM so that duplicate entries are ignored and the insert is performed?
The table uses two columns as its primary key.
This is my insert command (TypeORM + NestJS):
public async saveBulk(historicalPrices: IHistoricalPrice[]) {
  if (!historicalPrices.length) {
    return;
  }
  const repoPrices = historicalPrices.map((p) => this.historicalPricesRepository.create(p));
  await this.historicalPricesRepository.save(repoPrices, { chunk: 200 });
}
Thanks in advance
You will have to use the InsertQueryBuilder to save the entities instead of the repository.save method. InsertQueryBuilder allows you to call an additional method, orIgnore(), which adds the IGNORE literal to your MySQL INSERT statement. From the official MySQL docs:
When INSERT IGNORE is used, the insert operation fails silently for rows containing the unmatched value, but inserts rows that are matched.
One drawback, obviously, is that you'll now have to chunk the rows on your own; InsertQueryBuilder doesn't provide any option to chunk the entities. Your code should look like this:
// InsertQueryBuilder doesn't chunk for you, so slice the rows yourself
const targetEntity = this.historicalPricesRepository.target;
for (let i = 0; i < historicalPrices.length; i += 200) {
  const chunk = historicalPrices.slice(i, i + 200);
  await this.historicalPricesRepository
    .createQueryBuilder()
    .insert()
    .into(targetEntity)
    .values(chunk)
    .orIgnore() // renders as INSERT IGNORE on MySQL, skipping duplicates
    .execute();
}
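For reference, with the MySQL driver the builder above emits an INSERT IGNORE INTO ... statement, so duplicate-key rows are skipped with a warning instead of aborting the batch; on Postgres the same orIgnore() call renders as ON CONFLICT DO NOTHING.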
I am trying to add a columnSummary to my table using Handsontable, but it seems that the function does not fire. The stretchH value gets set properly, but the table does not react to the columnSummary option:
this.$refs.hot.hotInstance.updateSettings({
  stretchH: 'all',
  columnSummary: [{
    destinationRow: 0,
    destinationColumn: 2,
    reversedRowCoords: true,
    type: 'custom',
    customFunction: function(endpoint) {
      console.log("TEST");
    }
  }]
}, false);
I have also tried with type:'sum' without any luck.
Thanks for all help and guidance!
columnSummary cannot be changed with updateSettings: GH #3597
You can set columnSummary settings at the initialization of Handsontable.
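For example, a minimal init-time configuration (the container element and data here are placeholders) would be:

var hot = new Handsontable(document.getElementById('example'), {
  data: data,
  stretchH: 'all',
  columnSummary: [{
    destinationRow: 0,
    destinationColumn: 2,
    reversedRowCoords: true,
    type: 'sum'
  }]
});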
One workaround would be to manage your own column summary somehow, since the Handsontable one can give you some headaches. You may try to add one additional row to put your arithmetic in, but it is messy: it needs a fixed number of rows and does not work with filtering and sorting operations. Still, it could work well under some circumstances.
In my humble opinion, though, a column summary has to be fully functional, so we need to keep the summary row out of the table data. What comes to mind is to take the above-mentioned additional row out of the table data "area", but that would force us to make this out-of-table row always look as if it were still part of the table.
So I thought that instead of adding a new row, we could simply put our column summary inside the column header:
Here is a working JSFiddle example.
Once the Handsontable table is rendered, we need to iterate through the columns and set our column summary right in the table cell HTML content:
for (var i = 0; i < tableConfig.columns.length; i++) {
  var columnHeader = document.querySelectorAll('.ht_clone_top th')[i];
  if (columnHeader) { // Just to be sure the column header exists
    var summaryColumnHeader = document.createElement('div');
    summaryColumnHeader.className = 'custom-column-summary';
    columnHeader.appendChild(summaryColumnHeader);
  }
}
Now that our placeholders are set, we have to update them with some arithmetic results:
var printedData = hotInstance.getData();
for (var i = 0; i < tableConfig.columns.length; i++) {
  var summaryColumnHeader = document.querySelectorAll('.ht_clone_top th')[i].querySelector('.custom-column-summary'); // Get back our column summary for each column
  if (summaryColumnHeader) {
    var res = 0;
    printedData.forEach(function(row) { res += row[i]; }); // Sum all the data stored under that column
    summaryColumnHeader.innerText = '= ' + res;
  }
}
This piece of code may be wrapped in a function and called whenever needed:

var hotInstance = new Handsontable(/* ... */);

setMySummaryHeaderCalc(); // When the Handsontable table is rendered

Handsontable.hooks.add('afterFilter', function(conditionsStack) { // When the table is filtered
  setMySummaryHeaderCalc();
}, hotInstance);
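You might register the same callback on other hooks, such as afterColumnSort or afterChange, if sorting or edits should refresh the summaries too.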
Feel free to comment, I could improve my answer.
I have a PDO SQL script which enables a user to complete a form which captures band information. It then posts this information to my database table called 'bands'. This works fine.
Simultaneously, I would like the script to update a different table called 'users', which has a column called 'num_bands' that needs to increase by 1 each time the user creates another band.
I have tried a number of methods, but none of them work. The script can INSERT into the 'bands' table perfectly, but I cannot UPDATE the 'users' table. Here is the 'register_band' script:
<?php
// First we execute our common code to connect to the database and start the session
require("common.php");

// At the top of the page we check to see whether the user is logged in or not
if(empty($_SESSION['user']))
{
    // If they are not, we redirect them to the login page.
    header("Location: ../index.php");

    // Remember that this die statement is absolutely critical. Without it,
    // people can view your members-only content without logging in.
    die("Redirecting to ../index.php");
}

// This if statement checks to determine whether the registration form has been submitted.
// If it has, then the registration code is run; otherwise the form is displayed.
if(!empty($_POST))
{
    // Ensure that the user has entered a non-empty username
    if(empty($_POST['username']))
    {
        // Note that die() is generally a terrible way of handling user errors
        // like this. It is much better to display the error with the form
        // and allow the user to correct their mistake. However, that is an
        // exercise for you to implement yourself.
        die("Please enter a username.");
    }

    // An INSERT query is used to add new rows to a database table.
    // Again, we are using special tokens (technically called parameters) to
    // protect against SQL injection attacks.
    $query = "
        INSERT INTO bands (
            member_id,
            username,
            bandname,
            bandhometown,
            bandtype
        ) VALUES (
            :member_id,
            :username,
            :bandname,
            :bandhometown,
            :bandtype
        )
    ";

    // Here we prepare our named parameters for insertion into the SQL query.
    $query_params = array(
        ':member_id' => $_POST['member_id'],
        ':username' => $_POST['username'],
        ':bandname' => $_POST['bandname'],
        ':bandhometown' => $_POST['bandhometown'],
        ':bandtype' => $_POST['bandtype']
    );

    try
    {
        // Execute the query to create the band
        $stmt = $db->prepare($query);
        $result = $stmt->execute($query_params);
    }
    catch(PDOException $ex)
    {
        // Note: On a production website, you should not output $ex->getMessage().
        // It may provide an attacker with helpful information about your code.
        die("Failed to run query: " . $ex->getMessage());
    }

    $query2 = "UPDATE users
               SET num_bands = num_bands + 1
               WHERE id = :member_id";
    $stmt2 = $db->prepare($query2);

    // This redirects the user to the private page after they register
    header("Location: ../gig_view.php");

    // Calling die or exit after performing a redirect using the header function
    // is critical. The rest of your PHP script will continue to execute and
    // will be sent to the user if you do not die or exit.
    die("Redirecting to ../gig_view.php");
}
?>
I'm running this in non-production mode at the moment, so the code is not 100%. How do I get the script to UPDATE the 'users' table?
// Release the first statement's resources before running the next query
$stmt->closeCursor();

$query2 = "UPDATE users
           SET num_bands = num_bands + 1
           WHERE id = :member_id";
$stmt2 = $db->prepare($query2);

// Your script prepares $stmt2 but never executes it: pass the named
// parameter and call execute()
$params = array(':member_id' => $_POST['member_id']);
$result = $stmt2->execute($params);
The code you have here is well documented and shows how to use PDO prepared statements and how to execute them with parameters.
Just follow the same pattern as you did with your INSERT; only the query string changes, and you still have to call execute() with the bound parameters.
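Since the INSERT and the UPDATE belong together, you might also wrap both statements in a transaction so the band row and the counter cannot get out of sync. A sketch using PDO's standard transaction API:

$db->beginTransaction();
try
{
    $stmt = $db->prepare($query);
    $stmt->execute($query_params);

    $stmt2 = $db->prepare($query2);
    $stmt2->execute(array(':member_id' => $_POST['member_id']));

    $db->commit();
}
catch(PDOException $ex)
{
    $db->rollBack();
    die("Failed to run query: " . $ex->getMessage());
}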
I query a BigQuery table from a PHP script and get the result. I want to save the result in a new table for future use...
The option @Pentium10 mentioned works, but there is another way to do it when you already have query results and you want to save them.
All queries in BigQuery generate output tables. If you don't specify your own destination table, the output table will be an automatically generated table that only sticks around for 24 hours. You can, however, copy that temporary table to a new destination table and it will stick around for as long as you like.
To get the destination table, you need to look up the job. If you use the jobs.query() API, the job id is in the jobReference field of the response (see here). To look up the job, you can use jobs.get() with that job id, and you'll get back the destination table information (datasetId and tableId) from configuration.query.destinationTable (the job object is described here).
You can copy that destination table to your own table by using the jobs.insert() call with a copy configuration section filled out. Info on copying a table is here.
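With the PHP client, that copy step could look roughly like this (a sketch against the google/cloud-bigquery library, where Table::copy() returns a copy job configuration in recent versions; the dataset and table names are placeholders, so verify the method names against the version you use):

// $queryJob is the completed query job whose temporary results we want to keep
$dest = $queryJob->info()['configuration']['query']['destinationTable'];
$sourceTable = $bigQuery->dataset($dest['datasetId'])->table($dest['tableId']);

// Copy the temporary table into a table we control
$targetTable = $bigQuery->dataset('my_dataset')->table('saved_results');
$copyConfig = $sourceTable->copy($targetTable);
$bigQuery->runJob($copyConfig);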
Passing parameters to the BQ call is tricky. This should work in more recent cloud library versions:
public function runBigQueryJobIntoTable($query, $project, $dataset, $table)
{
    $bigQuery = new BigQueryClient(['projectId' => $project]);
    $destinationTable = $bigQuery->dataset($dataset)->table($table);
    $queryJobConfig = $bigQuery->query($query)->destinationTable($destinationTable);
    $job = $bigQuery->startQuery($queryJobConfig);
    $queryResults = $job->queryResults();

    // Poll until the query job finishes
    while (!$queryResults->isComplete()) {
        sleep(1);
        $queryResults->reload();
    }

    return true;
}
For old versions:
public function runBigQueryJobIntoTable($query, $project, $dataset, $table)
{
    $bigQuery = new BigQueryClient(['projectId' => $project]);
    $jobConfig = [
        'destinationTable' => [
            'projectId' => $project,
            'datasetId' => $dataset,
            'tableId' => $table
        ]
    ];
    $job = $bigQuery->runQueryAsJob($query, ['jobConfig' => $jobConfig]);
    $queryResults = $job->queryResults();

    // Poll until the query job finishes
    while (!$queryResults->isComplete()) {
        sleep(1);
        $queryResults->reload();
    }

    return true;
}
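Either version can then be invoked with your query and target table (names here are placeholders):

$this->runBigQueryJobIntoTable(
    'SELECT * FROM `my_dataset.source_table`',
    'my-project',
    'my_dataset',
    'saved_results'
);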
You need to set the destinationTable on the call; the results will be written to the table you set.
https://developers.google.com/bigquery/querying-data#asyncqueries
I have the following Unit Test method:
void TestOrderItemDelete()
{
    using (new SessionScope())
    {
        var order = Order.FindById(1234);
        var originalItemCount = order.OrderItems.Count;
        Assert.IsTrue(originalItemCount > 0);

        var itemToDelete = order.OrderItems[0];
        itemToDelete.DeleteAndFlush(); // itemToDelete.Delete();

        order.Refresh();
        Assert.AreEqual(originalItemCount - 1, order.OrderItems.Count);
    }
}
As you can see from the comment after the DeleteAndFlush call, I had to change it from a simple Delete to get the unit test to pass. Why is this? The same is not true for my other unit test, which adds an OrderItem and works just fine:
void TestOrderItemAdd()
{
    using (new SessionScope())
    {
        var order = Order.FindById(1234);
        var originalItemCount = order.OrderItems.Count;

        var itemToAdd = new OrderItem();
        itemToAdd.Order = order;
        itemToAdd.Create(); // Notice, this is not CreateAndFlush

        order.Refresh();
        Assert.AreEqual(originalItemCount + 1, order.OrderItems.Count);
    }
}
All of this came up when I started using Lazy Instantiation of the Order.OrderItems relationship mapping, and had to add the using(new SessionScope) block around the test.
Any ideas?
This is difficult to troubleshoot without knowing the contents of your mappings, but one possibility is that you have the ID property of the OrderItem mapped to an identity column (or sequence, etc.) in the DB. If this is the case, NHibernate must make a trip to the database in order to generate the ID, so the OrderItem is inserted immediately when you call Create(). This is not true of a delete, so the SQL DELETE statement isn't executed until the session flushes.
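For illustration, a Castle ActiveRecord mapping along these lines (class and column names are hypothetical) would force the immediate INSERT, because the id only exists once the row is written:

[ActiveRecord("OrderItems")]
public class OrderItem : ActiveRecordBase<OrderItem>
{
    // Identity keys are generated by the database on INSERT, so Create()
    // must hit the database immediately to obtain the new id.
    [PrimaryKey(PrimaryKeyType.Identity, "Id")]
    public int Id { get; set; }

    [BelongsTo("OrderId")]
    public Order Order { get; set; }
}

A generator that can assign ids in memory (for example PrimaryKeyType.HiLo or PrimaryKeyType.Guid) would let NHibernate defer the INSERT until flush, making Create() behave like Delete() does here.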