Netsuite Saved Search in Scheduled Script No Results - scripting

I have a Saved Search (SS) which yields results when run in the browser. However, when executed in code, via a Scheduled Script, there are no results.
Here's a simplified example:
The SS with ID customsearch1181 returns 10 results in the browser.
However, after executing the script below, the results array is empty.
We can assume the SS will yield fewer than 4,000 results, so there's no need to run a paged search.
/**
 * @NApiVersion 2.1
 * @NScriptType ScheduledScript
 */
define(['N/search'], (search) => {
    const execute = (scriptContext) => {
        const custSearch = search.load({ id: 'customsearch1181' });
        const results = [];
        custSearch.run().each((result) => {
            results.push(result);
            return true; // keep iterating; each() stops after 4,000 results
        });
        log.debug({ title: 'search result count', details: results.length });
    };
    return { execute };
});
This script does log results for other SS IDs. One observation I've made is that the SS in question has a large number of filters.
Has anyone experienced this issue? What is responsible for this behavior?

A Scheduled Script is a server-side script that executes as the user "System" with administrator-level permissions, so user-specific filters or permissions on the saved search may change which filters are applied and which results come back.
Please test with a manual trigger (Save & Execute) and with a scheduled trigger separately.
Hope this assumption helps you find the issue.
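One way to check this is to log the filters the loaded search object actually carries before running it, and compare them against what the UI shows. A debugging sketch, reusing the question's `customsearch1181` (this only runs inside NetSuite as a SuiteScript 2.x scheduled script; `filterExpression` and `filters` are documented properties of a loaded search):

```javascript
/**
 * @NApiVersion 2.1
 * @NScriptType ScheduledScript
 */
define(['N/search'], (search) => {
    const execute = () => {
        const custSearch = search.load({ id: 'customsearch1181' });
        // Log the filter expression the server-side search actually contains;
        // compare this against the filters shown when you run it in the browser.
        log.debug({
            title: 'filter expression',
            details: JSON.stringify(custSearch.filterExpression)
        });
        log.debug({
            title: 'filter count',
            details: custSearch.filters.length
        });
    };
    return { execute };
});
```

If a filter visible in the UI is missing here (or vice versa), that points at role- or user-dependent filtering rather than a scripting bug.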

Related

Getting Timeout while inserting in transaction using node-mssql package sqlserver

I recently started using the mssql package in my Node/Express application for accessing the DB.
I read through various documents, including the implementation and a tutorial on how to establish and use a connection. Here are a few things I am confused about.
Clarification
Is it good practice to keep a connection open across the application? My current implementation looks like this:
global.sql = await mssql.connect(config, function (err) { /*LOGS*/});
And wherever I query, I do it like this:
async function getItems() {
    return await sql.query`select * from tbl where val in (${values})`;
}
Is it the right way of doing things, or should I do it like this?
async function getItems() {
    const sql = await mssql.connect();
    return await sql.query`select * from tbl where val in (${values})`;
}
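On the first question: the usual pattern is to create the pool once and then share it everywhere, rather than reconnecting per query or stashing it on `global`. The memoization itself is library-agnostic; a minimal sketch, where `connect` stands in for `mssql.connect(config)` (the only assumed API) and the fake connect at the bottom is purely for illustration:

```javascript
// Minimal sketch of "create the pool once, reuse it everywhere".
// `connect` stands in for mssql.connect(config), which resolves to a pool.
function makePoolProvider(connect, config) {
    let poolPromise = null;
    return function getPool() {
        // The first caller creates the pool; every later (or concurrent)
        // caller reuses the same promise, so a second pool is never opened.
        if (!poolPromise) {
            poolPromise = connect(config);
        }
        return poolPromise;
    };
}

// Example with a fake connect that just counts how often it runs:
let connections = 0;
const getPool = makePoolProvider(async (cfg) => {
    connections += 1;
    return { name: 'pool', cfg };
}, { server: 'localhost' });
```

In a real app you would put `makePoolProvider(mssql.connect, config)` in one module and import `getPool` wherever you need to run a query.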
Query:
I was going through the docs in the NPM readme. There, queries are done in two ways:
1. await sql.query`select * from mytable where id = ${value}`
2. await new sql.Request().query('select 1 as number')
What is the difference between the two, and when should each one be used?
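On that difference: the tagged-template form hands the library the literal's fixed strings and the interpolated values separately, so each `${value}` can be bound as a parameter instead of being spliced into the SQL text (guarding against injection), while `new sql.Request().query('...')` sends the string as-is and leaves parameterization to you via `request.input(...)`. How a template tag sees its input can be shown without the library at all; the `query` tag below is a toy stand-in, not mssql's implementation:

```javascript
// Toy template tag mimicking how a library can auto-parameterize:
// it receives the literal's fixed strings and the interpolated values
// separately, so values never end up inside the SQL text.
function query(strings, ...values) {
    const text = strings.reduce(
        (sql, part, i) => sql + part + (i < values.length ? `@p${i}` : ''),
        ''
    );
    return { text, parameters: values };
}

const id = 42;
const q = query`select * from mytable where id = ${id}`;
// q.text       -> 'select * from mytable where id = @p0'
// q.parameters -> [42]
```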
Blocker:
I am able to run an insert query with:
await sql.query`insert into tbl (val1, val2) values (${item}, ${userId})`
// sql is the connection for the global variable as mentioned above
I tried creating the above query inside a transaction. For that I used this:
const transaction = new sql.Transaction();
await transaction.begin();
const request = new sql.Request(transaction);
await request.query(/* insert into tbl .... */);
It was working fine, but after some time, when I retried, the query started timing out with the error Timeout: Request failed to complete in 15000ms.
I can't understand why this is happening.
The same query runs as expected from SQL Server Management Studio.
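A likely cause of that timeout: the transaction above is begun but never committed or rolled back, so it keeps its locks on the table, and later queries against the same table block until they hit the 15-second request timeout (SSMS works because it runs in a separate session that you eventually close). A generic guard that always finishes the transaction, sketched against any object with `begin`/`commit`/`rollback` methods (an mssql `Transaction` has all three):

```javascript
// Sketch: run `work` inside a transaction and guarantee it is finished,
// so locks are always released even when a query throws.
async function withTransaction(transaction, work) {
    await transaction.begin();
    try {
        const result = await work(transaction);
        await transaction.commit();   // success: persist and release locks
        return result;
    } catch (err) {
        await transaction.rollback(); // failure: undo and still release locks
        throw err;
    }
}
```

With mssql this would be called roughly as `withTransaction(new sql.Transaction(), async (t) => { await new sql.Request(t).query(/* insert ... */); })`.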

API call to bigquery.jobs.insert failed: Not Found: Dataset

I'm working on importing CSV files from a Google Drive, through Apps Scripts into Big Query.
But when the code gets to the part where it needs to send the job to BigQuery, it states that the dataset is not found, even though the correct dataset ID is already in the code.
Thank you very much!
If you are using the Google example code, the error you describe is likely more than a copy-and-paste issue. Validate that you have the following:
const projectId = 'XXXXXXXX';
const datasetId = 'YYYYYYYY';
const csvFileId = '0BwzA1Orbvy5WMXFLaTR1Z1p2UDg';
try {
    table = BigQuery.Tables.insert(table, projectId, datasetId);
    Logger.log('Table created: %s', table.id);
} catch (error) {
    Logger.log('unable to create table: %s', error);
}
as shown in the documentation at:
https://developers.google.com/apps-script/advanced/bigquery
Also verify that the BigQuery advanced service is enabled under the Services tab.
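"Not Found: Dataset" usually means the dataset does not exist in that project (or the project/dataset IDs don't match exactly, or the dataset lives in a different location), so it can help to check for the dataset before inserting the table and create it if missing. A sketch using the same Apps Script advanced service as above (only runnable inside Apps Script with the BigQuery service enabled; `ensureDataset` is a hypothetical helper name, with `projectId`/`datasetId` as in the snippet above):

```javascript
// Verify the dataset exists before creating the table; create it if not.
function ensureDataset(projectId, datasetId) {
    try {
        // Throws if the dataset is not found in this project.
        return BigQuery.Datasets.get(projectId, datasetId);
    } catch (e) {
        Logger.log('Dataset %s not found in project %s, creating it', datasetId, projectId);
        const dataset = {
            datasetReference: { projectId: projectId, datasetId: datasetId }
        };
        return BigQuery.Datasets.insert(dataset, projectId);
    }
}
```

Call this before `BigQuery.Tables.insert(...)` so the job always has a dataset to target.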

Check if Job is already in queue using Laravel 5 and Redis

I implemented a jobs queue a few days ago and have been experiencing problems with duplication. I'm currently working with Redis and followed Laravel's official tutorial.
In my case, whenever someone visits the homepage, a job is dispatched to the queue. Let's take this example:
HomeController's index() :
public function index()
{
    if (/* condition */) {
        // UpdateServer being the job
        $this->dispatch(new UpdateServer());
    }
}
Since this task takes about 10 seconds to complete, if there's n requests to my homepage while the task is being processed, there will be n more of the same job in queue, resulting in unexpected results in my Database.
So my question is, is there any way to know if a certain job is already in queue?
I know it's an old question but I find myself coming back here again and again from Google so I wanted to give it an answer. I wanted an easy way to view the jobs in the queue inside my Laravel application on a dashboard and used the following code.
$thejobs = array();
// Get the number of jobs on the queue
$numJobs = Redis::connection()->llen('queues:default');
// Here we select details for up to 1000 jobs
$jobs = Redis::connection()->lrange('queues:default', 0, 1000);
// I wanted to clean up the data a bit
// you could use var_dump to see what it looks like before this
// var_dump($jobs);
foreach ($jobs as $job) {
    // Each job here is in json format so decode to object form
    $tmpdata = json_decode($job);
    $data = $tmpdata->data;
    // I wanted to just get the command so I stripped away App\Jobs at the start
    $command = $this->get_string_between($data->command, '"App\Jobs\\', '"');
    $id = $tmpdata->id;
    // Could be good to see the number of attempts
    $attempts = $tmpdata->attempts;
    $thejobs[] = array($command, $id, $attempts);
}
// Now you can use the data and compare it or check if your job is already in queue
I don't recommend doing this, especially on page load, such as on the index page as the OP has done. Most likely you need to rethink your approach if you find yourself needing code like this to check whether a job is running.
The answer is specific to queues running Redis.
I know it's a very old question, but I'm answering it for future Google users.
Since Laravel 8 there is the "Unique Jobs" feature - https://laravel.com/docs/8.x/queues#unique-jobs.
For anyone wondering why
Queue::size('queueName');
is not the same as
Redis::llen('queues:queueName');
it's because Laravel tracks a queue across three Redis keys, so if you want the true number of jobs in the queue you must check all three:
Redis::lrange('queues:queueName', 0, -1);
Redis::zrange('queues:queueName:delayed', 0, -1);
Redis::zrange('queues:queueName:reserved', 0, -1);
Now you can evaluate whether your desired job is in one of those queues and act accordingly.
You can also do the check in the job's handle() method and skip the work if the same job is already queued:
public function handle()
{
    $queue = \DB::table(config('queue.connections.database.table'))->orderBy('id')->get();
    foreach ($queue as $job) {
        $payload = json_decode($job->payload, true);
        if ($payload['displayName'] == self::class && $job->attempts == 0) {
            // same job in queue, skip
            return;
        }
    }
    // do the work
}

Chaining waterline calls with Promises

I have been hitting my head off a wall on this for the last three days.
I am using sailsjs and the waterline ORM that comes bundled with it. I want to run DB calls one after another. I know I can do this by nesting them inside "then" calls, but it just looks wrong.
I have gone over the Q documentation and tutorials several times, but I still don't get how to chain "then" calls off existing promises sequentially :(
I want to:
create a user
create a action
link the user & action
update the user
update the action
My code looks like
var mail = 'test#test.com';
Users.create({email: mail, name: ''}).then(console.log).fail(console.log);
Actions.create({actionID: 123})
    .then(function(error, action){
        Users.findOneByEmail(mail).then(function(person){
            person.actions.add(action.id);
            person.save(console.log);
        }).fail(console.log)
    });
Users.update({email: mail}, {name: 'Brian'}).exec(console.log);
Actions.update({actionID: 123}, {now: 'running'}).exec(console.log);
As you can see from the code I've been using a mix of exec & then :P
I think the way is to connect
Users.create(...).then -> Actions.create(...).then -> Users.findOneByEmail(...).then -> and then the updates.
Huge thanks for any help!
So after a day's research, I think I've cracked it.
Note: The first version I got working had the "then"s lined up (removing the pyramid of doom) by returning the next create from each "then", which allows the following "then" to sit on the next line. http://documentup.com/kriskowal/q/#tutorial/chaining
Here's my final version
var mail = 'test#test.com';
Users.create({email: mail, name: ''})
    .then(function(user){
        return [Actions.create({actionID: 123}), user];
    }).spread(function(action, user){
        user.actions.add(action.id);
        user.name = 'Brian';
        user.save();
        action.now = 'running';
        action.save();
    }).catch(console.error);
One of the cool things is "spread", which lets you line up promises and values and, once they have all completed, passes their results into the next handler.
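The flattening trick mentioned in the note above — return the next promise from inside "then" so the chain stays level instead of nesting — can be sketched with plain Promises. The step names below just mirror the question's create → link → update sequence; no waterline or Q involved:

```javascript
// Each step returns a promise; returning it from `then` makes the next
// `then` wait for it, so the steps run strictly one after another.
function step(label, log) {
    return Promise.resolve(label).then((value) => {
        log.push(value);
        return value;
    });
}

function runSequence(log) {
    return step('create user', log)
        .then(() => step('create action', log))
        .then(() => step('link user and action', log))
        .then(() => step('update records', log));
}
```

Each real DB call (Users.create, Actions.create, ...) would take the place of a `step`; as long as each handler returns the next promise, the chain stays flat and strictly ordered.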

Updating Data Source Login Credentials for SSRS Report Server Tables

I have added a lot of reports with an invalid data source login to an SSRS report server, and I want to update the user name and password with a script so I don't have to update each report individually.
However, from what I can tell, the fields are stored as images and are encrypted. I can't find anything about how they are encrypted or how to update them. It appears that the user name and password are stored in the dbo.DataSource table. Any ideas? I want the script to run in SQL.
I would be very, very, VERY leery of hacking the Reporting Services tables. It may be that someone out there can offer a reliable way to do what you suggest, but it strikes me as a good way to clobber your entire installation.
My suggestion would be that you make use of the Reporting Services APIs and write a tiny app to do this for you. The APIs are very full-featured -- pretty much anything you can do from the Report Manager website, you can do with the APIs -- and fairly simple to use.
The following code does NOT do exactly what you want -- it points the reports to a shared data source -- but it should show you the basics of what you'd need to do.
public void ReassignDataSources()
{
    using (ReportingService2005 client = new ReportingService2005())
    {
        var reports = client.ListChildren(FolderName, true).Where(ci => ci.Type == ItemTypeEnum.Report);
        foreach (var report in reports)
        {
            SetServerDataSource(client, report.Path);
        }
    }
}

private void SetServerDataSource(ReportingService2005 client, string reportPath)
{
    var itemSources = client.GetItemDataSources(reportPath);
    if (itemSources.Any())
        client.SetItemDataSources(
            reportPath,
            new DataSource[] {
                new DataSource() {
                    Item = CreateServerDataSourceReference(),
                    Name = itemSources.First().Name
                }
            });
}

private DataSourceDefinitionOrReference CreateServerDataSourceReference()
{
    return new DataSourceReference() { Reference = _DataSourcePath };
}
I doubt this answers your question directly, but I hope it can offer some assistance.
MSDN Specifying Credentials
MSDN also suggests using shared data sources for this very reason: See MSDN on shared data sources