API call to bigquery.jobs.insert failed: Not Found: Dataset - google-bigquery

I'm working on importing CSV files from Google Drive, through Apps Script, into BigQuery.
But when the code gets to the part where it sends the job to BigQuery, it reports that the dataset is not found, even though the correct dataset ID is already in the code.
Thank you very much!

If you are using Google's example code, this error usually means more than a copy-and-paste slip: the placeholder IDs were not replaced with real ones. Validate that you have set the following:
const projectId = 'XXXXXXXX';
const datasetId = 'YYYYYYYY';
const csvFileId = '0BwzA1Orbvy5WMXFLaTR1Z1p2UDg';

try {
  // `table` is the table resource built earlier (see the linked documentation)
  table = BigQuery.Tables.insert(table, projectId, datasetId);
  Logger.log('Table created: %s', table.id);
} catch (error) {
  Logger.log('Unable to create table: %s', error);
}
as shown in the documentation here:
https://developers.google.com/apps-script/advanced/bigquery
Also verify that the BigQuery service is enabled in the Services section of the Apps Script editor.
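If the IDs check out, the next thing to inspect is the job resource passed to jobs.insert: the destination dataset is named inside configuration.load.destinationTable, and a mismatch there (for example, a stray project ID) produces exactly this Not Found error. A minimal sketch of that resource with placeholder IDs (makeLoadJob is a hypothetical helper; the field names follow the BigQuery Jobs REST resource):

```javascript
// Build the load-job resource that jobs.insert expects.
// The projectId/datasetId here must name a dataset that already exists.
function makeLoadJob(projectId, datasetId, tableId) {
  return {
    configuration: {
      load: {
        destinationTable: {
          projectId: projectId, // must match the project that owns the dataset
          datasetId: datasetId, // the dataset must exist before the job is inserted
          tableId: tableId
        },
        sourceFormat: 'CSV',
        skipLeadingRows: 1 // skip the CSV header row
      }
    }
  };
}

const job = makeLoadJob('my-project', 'my_dataset', 'my_table');
```

Logging this object just before the insert call makes it easy to spot which ID does not match the dataset you created.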

Redis: Cannot perform full-text search operations

I get an error saying "ReplyError: Car:index: no such index". I am indexing by "description". The terminal output contains all the relevant info, and I am not sure what is causing the problem. Help is welcome.
I've been trying to follow the Fireship tutorial on Redis and copied the code from https://fireship.io/lessons/redis-nextjs/
In redis-om version 0.3.6, you can add the following code to your API:
export async function createIndex() {
  await connect();
  const repository = client.fetchRepository(schema);
  await repository.createIndex();
}
Also make sure you create the index only once; searches will work once the index exists.
As Guy Royse mentioned in the comments, the problem is solved by calling the createIndex API.
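The "create it only once" advice can be enforced in code by memoizing the call; a sketch with a module-level promise (ensureIndex and the stub repository are hypothetical, not part of redis-om):

```javascript
// Guard so repository.createIndex() runs at most once per process:
// memoize the promise so concurrent callers share the same call.
let indexPromise = null;

function ensureIndex(repository) {
  if (!indexPromise) {
    indexPromise = repository.createIndex();
  }
  return indexPromise;
}

// Demonstration with a stub repository that counts calls:
const stub = {
  calls: 0,
  async createIndex() { this.calls += 1; }
};

(async () => {
  await ensureIndex(stub);
  await ensureIndex(stub);
  console.log(stub.calls); // 1 — the index was created only once
})();
```

Memoizing the promise, rather than a boolean flag, also protects against two concurrent API requests both slipping past the check before the first createIndex finishes.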

BigQuery: Table name missing dataset while no default dataset is set in the request

Here is the issue with a BigQuery query.
I know this query is missing a dataset name, hence the error "Table name "my_table" missing dataset while no default dataset is set in the request."
select * from my_table;
Changing 'my_table' to 'my_dataset.my_table' fixes the issue.
But can somebody help me with setting a default dataset?
The error message clearly indicates that BigQuery has such an option.
Depending on which API you are using, you can specify the defaultDataset parameter when running your BigQuery job. More information on the jobs.query API can be found at https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/query.
For example, using the Node.js API's createQueryJob (https://googleapis.dev/nodejs/bigquery/latest/BigQuery.html#createQueryJob), you can do something similar to this:
const {BigQuery} = require('@google-cloud/bigquery');

// Credentials belong on the client, not in the job options.
const bigquery = new BigQuery({
  keyFilename: process.env.GOOGLE_APPLICATION_CREDENTIALS,
  projectId: process.env.GOOGLE_APPLICATION_PROJECT_ID
});

const options = {
  defaultDataset: {
    datasetId: process.env.BIGQUERY_DATASET_ID,
    projectId: process.env.GOOGLE_APPLICATION_PROJECT_ID
  },
  query: `select * from my_table;`
};

const [job] = await bigquery.createQueryJob(options);
const [rows] = await job.getQueryResults();
The short answer is: SET @@dataset_id = 'whatever';
But this is a scripting statement, so you will have to enclose your SQL query in a BEGIN ... END block. Note the semicolon. Scripting gives you some added flexibility; on the other hand, it prevents the console from doing that fun thing where it estimates the processing cost.
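Putting the two answers together, a client can send the whole script, SET statement included, as a single query string. A small helper sketch (wrapWithDefaultDataset is a hypothetical name; @@dataset_id is BigQuery's system variable):

```javascript
// Wrap a query in a BigQuery script that sets @@dataset_id first,
// so unqualified table names resolve against the given dataset.
function wrapWithDefaultDataset(sql, datasetId) {
  return [
    'BEGIN',
    `  SET @@dataset_id = '${datasetId}';`,
    `  ${sql}`,
    'END'
  ].join('\n');
}

const script = wrapWithDefaultDataset('SELECT * FROM my_table;', 'my_dataset');
console.log(script);
```

The resulting string can then be passed as the `query` option to the Node.js client's query or createQueryJob, as in the example above.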

How to move a record from one model to another in Google App Maker

I have created 2 SQL models in Google App Maker. For simplicity's sake, let's say Model 1 has all of the information that can be added and edited for each of the records. Model 2 works as a storage model: once a record in Model 1 is removed, it moves over to Model 2. The idea is that the individual can click a "removed" boolean, which opens a dialog page to add comments for the removal; once complete, the record is moved to Model 2 for storage and is no longer visible in Model 1.
Is there any way to do this? If you need more information, let me know and I will try to provide it, but the reason I cannot post the existing app is that the information is confidential.
Thanks for your help!
Updated answer: move to another model
If you want to force users to enter a message, you need to forbid them from deleting records through datasources:
// onBeforeDelete model event
throw new Error('You should provide a message prior to deleting a record');
Then you need to implement audit itself:
// server script
function archive(itemKey, message) {
  if (!message) {
    throw new Error('Message is required');
  }
  var record = app.models.MyModel.getRecord(itemKey);
  if (!record) {
    throw new Error('Record was not found');
  }
  var archive = app.models.Removed.newRecord();
  archive.Field1 = record.Field1;
  archive.Field2 = record.Field2;
  // ...copy the remaining fields...
  archive.Message = message;
  app.saveRecords([archive]);
  app.deleteRecords([record]);
}
// client script
google.script.run
  .withSuccessHandler(function() {
    // TODO
  })
  .withFailureHandler(function() {
    // TODO
  })
  .archive(itemKey, message);
If you need to implement auditing for multiple/all models, you can generalize the snippet by passing the model's name and using Model Metadata: function archive(modelName, itemKey, message) {}
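The field-by-field copy in archive() can be generalized along those lines; a sketch with plain objects, assuming you can obtain the list of field names (in App Maker that would come from the model's metadata; here it is passed in explicitly, and copyFields is a hypothetical helper):

```javascript
// Copy the named fields from a source record onto a target record.
function copyFields(source, target, fieldNames) {
  fieldNames.forEach(function(name) {
    target[name] = source[name];
  });
  return target;
}

var record = { Field1: 'a', Field2: 'b', Internal: 'x' };
var archive = copyFields(record, {}, ['Field1', 'Field2']);
console.log(archive); // { Field1: 'a', Field2: 'b' }
```

Passing an explicit field list also lets you exclude fields that should not be archived.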
Original answer: move to another DB
Normally I would recommend just adding and setting a boolean field Deleted on the model and ensuring that records marked as deleted are not sent to the client. Implementing a move of data between databases can be tricky, since transactions are not supported across multiple databases.
If you desperately want to make your app more complex and less reliable, you can back up the record in the onBeforeDelete model event using the JDBC Apps Script service (the External Database Sample could be your friend to start with):
// onBeforeDelete model event
var connection = Jdbc.getConnection(dbUrl, user, userPassword);
var statement = connection.prepareStatement('INSERT INTO ' + TABLE_NAME +
    ' (Field1, Field2, ...) values (?, ?, ...)');
statement.setString(1, record.Field1);
statement.setString(2, record.Field2);
// ...bind the remaining fields...
statement.execute();
Why do you need JDBC? Because App Maker natively doesn't support models attached to different databases.
I was able to do what I needed using a query filter in a client script. This keeps the data on the back end when I export, and only shows the active user whatever is not removed:
var datasource1 = app.datasources.WatchList_Data;
datasource1.query.filters.Remove_from_WatchList._equals = 'No';
datasource1.load();

BigQuery: How to autoreload table with new storage JSON files?

I have just created a BigQuery table by linking available JSON files in Google Cloud Storage. But I do not see any option to auto-reload the table rows when new files are added to the Google Cloud Storage folder or bucket.
Currently, I have to go to the BigQuery console and then delete & recreate the same table to load new files. But this solution is not scalable for us because we run a cron job on the BigQuery API. How can I auto-reload data in BigQuery?
Thanks
When you define an external table on top of files in Google Cloud Storage, you can use a wildcard in the source location, so your table will represent all files that match.
Then, when you query such a table, you can use the _file_name pseudo-column, which will "tell" you which file a given row came from:
SELECT
  _file_name AS file,
  *
FROM `yourTable`
This way, whenever you add a new file in GCS, you will get it in the table "automatically".
With Google Cloud Functions you can automate BigQuery each time you receive a new file:
Create a new function at https://console.cloud.google.com/functions/add
Point "bucket" to the one receiving files.
Code-wise, add the BigQuery client to the dependencies in package.json:
{
  "dependencies": {
    "@google-cloud/bigquery": "^0.9.6"
  }
}
And in index.js you can act on the new file in any appropriate way:
var BigQuery = require('@google-cloud/bigquery');
var bigQuery = BigQuery({ projectId: 'your-project-id' });

exports.processFile = (event, callback) => {
  console.log('Processing: ' + JSON.stringify(event.data));
  query(event.data);
  callback();
};

function query(data) {
  const filename = data.name.split('/').pop();
  const full_filename = `gs://${data.bucket}/${data.name}`;
  // if you want to run a query:
  const sql = '...';
  bigQuery.query({
    query: sql,
    useLegacySql: false
  });
}
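The same trigger can be used to load the new file into an existing table rather than just query. A sketch of composing the source URI from the event (makeGcsUri and loadFile are hypothetical names; the actual load call is left as a comment since it needs a live project):

```javascript
// Compose the gs:// URI of the object that triggered the function.
function makeGcsUri(data) {
  return `gs://${data.bucket}/${data.name}`;
}

function loadFile(data) {
  const uri = makeGcsUri(data);
  // With the BigQuery client you would then create a load job whose
  // sourceUris include this URI (see the jobs.insert "load" configuration).
  return uri;
}

console.log(loadFile({ bucket: 'my-bucket', name: 'exports/file.json' }));
// gs://my-bucket/exports/file.json
```

This appends each new file's rows as it arrives, avoiding the delete-and-recreate cycle from the question.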

Updating Data Source Login Credentials for SSRS Report Server Tables

I have added a lot of reports with an invalid data source login to an SSRS report server, and I want to update the user name and password with a script so I don't have to update each report individually.
However, from what I can tell, the fields are stored as images and are encrypted. I can't find anything about how they are encrypted or how to update them. It appears that the user name and password are stored in the dbo.DataSource table. Any ideas? I want the script to run in SQL.
I would be very, very, VERY leery of hacking the Reporting Services tables. It may be that someone out there can offer a reliable way to do what you suggest, but it strikes me as a good way to clobber your entire installation.
My suggestion would be that you make use of the Reporting Services APIs and write a tiny app to do this for you. The APIs are very full-featured -- pretty much anything you can do from the Report Manager website, you can do with the APIs -- and fairly simple to use.
The following code does NOT do exactly what you want -- it points the reports to a shared data source -- but it should show you the basics of what you'd need to do.
public void ReassignDataSources()
{
    using (ReportingService2005 client = new ReportingService2005())
    {
        var reports = client.ListChildren(FolderName, true)
            .Where(ci => ci.Type == ItemTypeEnum.Report);
        foreach (var report in reports)
        {
            SetServerDataSource(client, report.Path);
        }
    }
}

private void SetServerDataSource(ReportingService2005 client, string reportPath)
{
    var itemSources = client.GetItemDataSources(reportPath);
    if (itemSources.Any())
        client.SetItemDataSources(
            reportPath,
            new DataSource[] {
                new DataSource() {
                    Item = CreateServerDataSourceReference(),
                    Name = itemSources.First().Name
                }
            });
}

private DataSourceDefinitionOrReference CreateServerDataSourceReference()
{
    return new DataSourceReference() { Reference = _DataSourcePath };
}
I doubt this answers your question directly, but I hope it can offer some assistance.
MSDN Specifying Credentials
MSDN also suggests using shared data sources for this very reason: See MSDN on shared data sources