bigquery nodejs client delete dataset error - google-bigquery

I'm creating a temporary dataset in BigQuery with the Node.js client. When I've finished my tasks, I want to delete this dataset. I'm doing it fairly simply, following the code in the BigQuery documentation.
// Creates a reference to the existing dataset
const dataset = bigquery.dataset(datasetId);
// Deletes the dataset
dataset
  .delete()
  .then(() => {
    console.log(`Dataset ${dataset.id} deleted.`);
  })
  .catch(err => {
    console.error('ERROR:', err);
  });
I'm receiving an error: Dataset xxx:5a58b519a3192fa942c57918 is still in use

To help prevent you from unintentionally deleting a dataset that still contains data, you will receive an error unless you first delete all tables and views that it contains. Once you have deleted the tables and views, you will be able to delete the dataset without this error.
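If the dataset is temporary and everything in it is disposable, you can either delete its tables first or pass the client's force option (which maps to the API's deleteContents flag). A minimal sketch, reusing the dataset reference from the question:
// Option 1: delete the dataset and everything it contains in one call.
// The `force` option maps to the REST API's deleteContents=true flag.
dataset
  .delete({ force: true })
  .then(() => console.log(`Dataset ${dataset.id} force-deleted.`))
  .catch(err => console.error('ERROR:', err));

// Option 2: delete the tables and views first, then the now-empty dataset.
dataset
  .getTables()
  .then(([tables]) => Promise.all(tables.map(table => table.delete())))
  .then(() => dataset.delete())
  .then(() => console.log(`Dataset ${dataset.id} deleted.`))
  .catch(err => console.error('ERROR:', err));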

Related

expo-sqlite using existing local database

I am using Expo and React Native to build a truth-or-dare app. I want to store hundreds of truth or dare questions to feed to the user. I figured SQLite would be most efficient for this (and would allow offline usage). I created the db using the DB Browser (SQLite) tool and created a single table named "Prompts" with several rows.
Here's the code I use for opening the database and performing a transaction:
import * as SQLite from "expo-sqlite";
import database from "../assets/db/TruthOrDareDB.db";

const db = SQLite.openDatabase(database);
console.log(db);
db.transaction((tx) => {
  console.log("transaction test");
  tx.executeSql(
    `
    SELECT *
    FROM Prompts;`,
    [],
    (_, result) => console.log("executeSql"),
    (transaction, error) => console.log(error)
  );
});
The openDatabase call returns a WebSQLDatabase object. I receive the "transaction test" log in the console, but I do not get the "executeSql" log or an error. I would expect to get at least one of them, so why am I not?
And as far as design goes, do you agree that SQLite is the best choice for my goal?

API call to bigquery.jobs.insert failed: Not Found: Dataset

I'm working on importing CSV files from Google Drive, through Apps Script, into BigQuery.
But when the code gets to the part where it needs to send the job to BigQuery, it states that the dataset is not found, even though the correct dataset ID is already in the code.
Thank you very much!
If you are using the Google example code, the error you describe points to something more than a copy-and-paste mistake. In any case, validate that you have the following:
const projectId = 'XXXXXXXX';
const datasetId = 'YYYYYYYY';
const csvFileId = '0BwzA1Orbvy5WMXFLaTR1Z1p2UDg';

try {
  // "table" is the table resource built earlier in the linked sample.
  table = BigQuery.Tables.insert(table, projectId, datasetId);
  Logger.log('Table created: %s', table.id);
} catch (error) {
  Logger.log('unable to create table: %s', error);
}
according to the documentation in the link:
https://developers.google.com/apps-script/advanced/bigquery
Also validate that the BigQuery advanced service is enabled under the Services tab.
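If the table exists but the load job still reports "Not Found: Dataset", double-check the projectId and datasetId inside the job's destinationTable. A rough sketch of that load step, modeled on the linked sample (the IDs are placeholders and the V8 runtime is assumed for const):
function loadCsvFromDrive() {
  const projectId = 'XXXXXXXX';   // must be the project that owns the dataset
  const datasetId = 'YYYYYYYY';   // must already exist, spelled exactly like this
  const tableId = 'ZZZZZZZZ';     // placeholder destination table
  const csvFileId = '0BwzA1Orbvy5WMXFLaTR1Z1p2UDg';

  const file = DriveApp.getFileById(csvFileId);
  const data = file.getBlob().setContentType('application/octet-stream');

  const job = {
    configuration: {
      load: {
        destinationTable: {
          projectId: projectId,
          datasetId: datasetId,
          tableId: tableId
        },
        skipLeadingRows: 1
      }
    }
  };

  const insertedJob = BigQuery.Jobs.insert(job, projectId, data);
  Logger.log('Load job started: %s', insertedJob.jobReference.jobId);
}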

Netsuite Saved Search in Scheduled Script No Results

I have a Saved Search (SS) which yields results when run in the browser. However, when executed in code, via a Scheduled Script, there are no results.
Here's a simplified example:
The SS with ID customsearch1181 returns 10 results in the browser.
However, after executing the script below, the results array is empty.
We can assume the SS will yield less than 4k results so there's no need to run a paged search.
/**
 * @NApiVersion 2.1
 * @NScriptType ScheduledScript
 */
define(['N/search'],
  (search) => {
    const execute = (scriptContext) => {
      const custSearch = search.load({ id: 'customsearch1181' });
      const results = [];
      custSearch.run().each(function(result) {
        results.push(result);
        return true;
      });
      log.debug({ title: 'search result count', details: results.length });
    };
    return { execute };
  });
This script does log results for other SS IDs. One observation I've made is that there are a lot of filters on the SS in question.
Has anyone experienced this issue? What is responsible for this behavior?
A scheduled script is a server-side script executed with administrator privileges as the "system" user, so user-specific filters and permissions on the saved search may affect which results it returns.
Please test with a manual trigger (Save & Execute) and a scheduled trigger separately.
Hopefully this assumption helps you find the issue.
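If it does turn out to be filter-related, a quick way to narrow it down is to log what the script actually sees when it loads the search. A small debug sketch along those lines, reusing the saved search ID from the question:
/**
 * @NApiVersion 2.1
 * @NScriptType ScheduledScript
 */
define(['N/search'], (search) => {
  const execute = (scriptContext) => {
    // Load the same saved search and log the filters the script actually
    // sees; filters that depend on the current user resolve differently
    // when the script runs as the system user.
    const custSearch = search.load({ id: 'customsearch1181' });
    log.debug({ title: 'filters', details: JSON.stringify(custSearch.filterExpression) });

    // runPaged().count returns the total result count without iterating.
    log.debug({ title: 'server-side result count', details: custSearch.runPaged().count });
  };
  return { execute };
});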

How to move a record from one model to another in Google App Maker

I have created 2 SQL models in Google App Maker. For simplicity's sake, let's say Model 1 has all of the information that can be added and edited for each of the records. Model 2 works as a storage model: once a record in Model 1 is removed, it moves over to Model 2. The idea is that the individual can click a "removed" boolean, which will open a dialog page to add comments for the removal; once complete, the record will be moved to Model 2 for storage and will no longer be visible in Model 1.
Is there any way to do this? If you need more information, let me know and I will try to provide it, but the reason I cannot post the existing app is that the information is confidential.
Thanks for your help!
Updated answer: move to another model
If you want to force users to enter a message, you need to forbid them from deleting records through datasources:
// onBeforeDelete model event
throw new Error('You should provide message prior deleting a record');
Then you need to implement the audit itself:
// server script
function archive(itemKey, message) {
  if (!message) {
    throw new Error('Message is required');
  }

  var record = app.models.MyModel.getRecord(itemKey);

  if (!record) {
    throw new Error('Record was not found');
  }

  var archive = app.models.Removed.newRecord();
  archive.Field1 = record.Field1;
  archive.Field2 = record.Field2;
  ...
  archive.Message = message;

  app.saveRecords([archive]);
  app.deleteRecords([record]);
}

// client script
google.script.run
  .withSuccessHandler(function() {
    // TODO
  })
  .withFailureHandler(function() {
    // TODO
  })
  .archive(itemKey, message);
If you need to implement auditing for multiple/all models, then you can generalize the snippet by passing the model's name and using Model Metadata: function archive(modelName, itemKey, message) {}
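A rough sketch of that generalization, assuming the Removed model shares the source model's field names plus a Message field; the field list is hard-coded here, but Model Metadata could supply it instead:
// server script - generalized archive(); "Removed", Field1/Field2 and the
// hard-coded field list are assumptions about your models.
function archive(modelName, itemKey, message) {
  if (!message) {
    throw new Error('Message is required');
  }
  var record = app.models[modelName].getRecord(itemKey);
  if (!record) {
    throw new Error('Record was not found');
  }
  var fieldNames = ['Field1', 'Field2'];   // or derive from Model Metadata
  var archiveRecord = app.models.Removed.newRecord();
  fieldNames.forEach(function(name) {
    archiveRecord[name] = record[name];
  });
  archiveRecord.Message = message;
  app.saveRecords([archiveRecord]);
  app.deleteRecords([record]);
}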
Original answer: move to another DB
Normally I would recommend just adding a boolean field Deleted to the model and ensuring that records marked as deleted are not sent to the client. Implementing data movement between databases can be tricky, since transactions are not supported across multiple databases.
If you desperately want to make your app more complex and less reliable, you can create a record's backup in the onBeforeDelete model event using the JDBC Apps Script service (the External Database Sample could be your friend to start with):
// onBeforeDelete model event
var connection = Jdbc.getConnection(dbUrl, user, userPassword);
var statement = connection.prepareStatement('INSERT INTO ' + TABLE_NAME +
' (Field1, Field2, ...) values (?, ?, ...)');
statement.setString(1, record.Field1);
statement.setString(2, record.Field2);
...
statement.execute();
Why do you need JDBC? Because App Maker natively doesn't support models attached to different databases.
I was able to do what I needed using a query filter in a client script. This keeps the data on the back end when I export, and only shows the active user whatever has not been removed.
var datasource1 = app.datasources.WatchList_Data;
datasource1.query.filters.Remove_from_WatchList._equals = 'No';
datasource1.load();

BigQuery: How to autoreload table with new storage JSON files?

I have just created a BigQuery table by linking available JSON files in Google Cloud Storage. But I do not see any option to auto-reload the table rows when new files are added to the Google Cloud Storage folder or bucket.
Currently, I have to go to the BigQuery console and then delete & recreate the same table to load the new files. But this solution is not scalable for us because we run a cron job on the BigQuery API. How do I auto-reload data in BigQuery?
Thanks
When you define an external table on top of files in Google Cloud Storage, you can use a wildcard for the source location, so your table will represent all files that match.
Then, when you query such a table, you can use the _file_name pseudo-column, which will "tell" you which file a given row came from:
SELECT
_file_name AS file,
*
FROM `yourTable`
This way - whenever you add new file in GCS - you will get it in table "automatically"
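For completeness, the external-table definition itself can also be created from the Node.js client used in the first question. A rough sketch, assuming a current @google-cloud/bigquery release and placeholder dataset, table, bucket and path names:
// Rough sketch: define an external table over every JSON file matching a
// wildcard. Dataset, table, bucket and folder names are placeholders.
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function createExternalTable() {
  const [table] = await bigquery.dataset('my_dataset').createTable('my_external_table', {
    externalDataConfiguration: {
      sourceFormat: 'NEWLINE_DELIMITED_JSON',
      sourceUris: ['gs://my-bucket/my-folder/*.json'],
      autodetect: true
    }
  });
  console.log(`Created external table ${table.id}`);
}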
With Google Cloud Functions you can automate BigQuery each time you receive a new file:
Create a new function at https://console.cloud.google.com/functions/add
Point "bucket" to the one receiving files.
Codewise, import BigQuery inside package.json:
{
  "dependencies": {
    "@google-cloud/bigquery": "^0.9.6"
  }
}
And on index.js you can act on the new file in any appropriate way:
var BigQuery = require('@google-cloud/bigquery');
var bigQuery = BigQuery({ projectId: 'your-project-id' });

exports.processFile = (event, callback) => {
  console.log('Processing: ' + JSON.stringify(event.data));
  query(event.data);
  callback();
};

function query(data) {
  const filename = data.name.split('/').pop();
  const full_filename = `gs://${data.bucket}/${data.name}`;
  // if you want to run a query:
  const sqlQuery = '...';
  bigQuery.query({
    query: sqlQuery,
    useLegacySql: false
  });
}
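If the goal is to keep appending each new JSON file to a regular (non-external) table, a load job per file is usually a better fit than a query. A rough sketch for a newer runtime, assuming current @google-cloud/bigquery and @google-cloud/storage clients and placeholder dataset/table names:
// Rough sketch: append each newly uploaded JSON file to an existing table.
// 'my_dataset' and 'my_table' are placeholders; event.bucket and event.name
// come from the Cloud Storage trigger on newer runtimes.
const {BigQuery} = require('@google-cloud/bigquery');
const {Storage} = require('@google-cloud/storage');

const bigquery = new BigQuery();
const storage = new Storage();

exports.loadFile = async (event, context) => {
  const file = storage.bucket(event.bucket).file(event.name);
  const [job] = await bigquery
    .dataset('my_dataset')
    .table('my_table')
    .load(file, {
      sourceFormat: 'NEWLINE_DELIMITED_JSON',
      writeDisposition: 'WRITE_APPEND'
    });
  console.log(`Load job ${job.id} finished for gs://${event.bucket}/${event.name}`);
};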