How to measure test execution duration with New Relic Synthetics scripts? - selenium

I'm using the New Relic Synthetics scripted browser to check some user transaction flows for reliability and performance.
However, the tracked duration is only about 8-10 s, which matches the initial page load time but not the async lazy loading of a large dataset (which takes approx. 40 s).
How can I make New Relic Synthetics scripted browser tests show the actual test execution duration and not only the duration of the initial request?
How can I track the actual Synthetics scripted browser test execution time and plot it on a chart with New Relic?
I have already tried the following, but it does not fix the issue:
Using a custom Insights attribute
See the New Relic docs for custom Insights attributes.
In the script, use:
var timeStart = Date.now();
// ... scripted browser steps, including waiting for the async data, go here ...
var timeEnd = Date.now();
$util.insights.set('testDuration', (timeEnd - timeStart));
with NRQL:
FROM SyntheticCheck SELECT numeric(custom.testDuration) WHERE numeric(custom.testDuration) > 0 SINCE 1 day ago
However, in New Relic this data is shown as a string and thus cannot be plotted on a chart as a measurement.
What do I need to do to make this a numeric measurement that can be drawn on a chart?
Time tracking with $har
The New Relic docs recommend using $har:
const testTimer = $har.addResource('Test Timer');
testTimer.startTimer();
await sleep(100);
testTimer.endTimer();
However, $har results in an error: Check failed with reference error: $har is not defined.
How do I use $har correctly?
Checking keyset(), the custom properties show up as strings
As proposed on the New Relic forum, I checked the field type and it appears to be registered as a string.
The NRQL query FROM SyntheticCheck SELECT keyset() returns:
[
  {
    "results": [
      {
        "stringKeys": [
          "custom.duration",
          "custom.testDuration",
Any other ideas on how to achieve this?
Thanks in advance, I really appreciate your expertise!

You can log values with
$util.insights.set('testDuration', (timeEnd - timeStart) )
And then convert them within the query with:
SELECT average(numeric(custom.testDuration)) FROM SyntheticCheck WHERE monitorName ='myMonitor' SINCE 7 days AGO TIMESERIES AUTO
However, you cannot report this within the Synthetics monitor itself and always need the query builder or a custom dashboard for analysis.
I would prefer a solution which reports properly within the Synthetics monitor.

Related

How to invoke an on-demand BigQuery Data Transfer Service?

I really liked BigQuery's Data Transfer Service. I have flat files in the exact schema waiting to be loaded into BQ. It would have been awesome to just set up a DTS schedule that picked up GCS files matching a pattern and loaded them into BQ. I like the built-in options to delete source files after copy and to email in case of trouble. But the biggest bummer is that the minimum interval is 60 minutes. That is crazy. I could have lived with a 10-minute delay, perhaps.
So if I set up the DTS to be on demand, how can I invoke it from an API? I am thinking of creating a cron job that calls it on demand every 10 minutes, but I can't figure out from the docs how to call it.
Also, what is my second-best, most reliable and cheapest way of moving GCS files (no ETL needed) into BQ tables that match the exact schema? Should I use Cloud Scheduler, Cloud Functions, Dataflow, Cloud Run, etc.?
If I use a Cloud Function, how can I submit all the files in my GCS bucket at the time of invocation as one BQ load job?
Lastly, does anyone know if DTS will lower the limit to 10 minutes in the future?
So if I set up the DTS to be on demand, how can I invoke it from an API? I am thinking of creating a cron job that calls it on demand every 10 minutes, but I can't figure out from the docs how to call it.
StartManualTransferRuns is part of the RPC library but does not have a REST API equivalent as of now. How to use that will depend on your environment. For instance, you can use the Python Client Library (docs).
As an example, I used the following code (you'll need to run pip install google-cloud-bigquery-datatransfer for the dependencies):
import time
from google.cloud import bigquery_datatransfer_v1
from google.protobuf.timestamp_pb2 import Timestamp
client = bigquery_datatransfer_v1.DataTransferServiceClient()
PROJECT_ID = 'PROJECT_ID'
TRANSFER_CONFIG_ID = '5e6...7bc'  # alphanumeric ID you'll find in the UI
# Build the resource name of the transfer config
parent = client.project_transfer_config_path(PROJECT_ID, TRANSFER_CONFIG_ID)
# Request a run time 10 seconds from now
start_time = bigquery_datatransfer_v1.types.Timestamp(seconds=int(time.time() + 10))
response = client.start_manual_transfer_runs(parent, requested_run_time=start_time)
print(response)
Note that you'll need to use the right Transfer Config ID and the requested_run_time has to be of type bigquery_datatransfer_v1.types.Timestamp (for which there was no example in the docs). I set a start time 10 seconds ahead of the current execution time.
You should get a response such as:
runs {
  name: "projects/PROJECT_NUMBER/locations/us/transferConfigs/5e6...7bc/runs/5e5...c04"
  destination_dataset_id: "DATASET_NAME"
  schedule_time {
    seconds: 1579358571
    nanos: 922599371
  }
  ...
  data_source_id: "google_cloud_storage"
  state: PENDING
  params {
    ...
  }
  run_time {
    seconds: 1579358581
  }
  user_id: 28...65
}
and the transfer is triggered as expected (never mind the error):
Also, what is my second-best, most reliable and cheapest way of moving GCS files (no ETL needed) into BQ tables that match the exact schema? Should I use Cloud Scheduler, Cloud Functions, Dataflow, Cloud Run, etc.?
With this approach you can set up a cron job to execute your function every ten minutes. As discussed in the comments, the minimum transfer interval is 60 minutes, so it won't pick up files less than one hour old (docs).
Apart from that, this is not a very robust solution, and this is where your follow-up questions come into play. I think these might be too broad to address in a single Stack Overflow question, but I would say that, for on-demand refresh, Cloud Scheduler + Cloud Functions/Cloud Run can work very well.
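For illustration, here is a minimal sketch of such a function using the same google-cloud-bigquery-datatransfer package (note that the call style follows the newer versions of the client library, so adjust it if you are on the 1.x library; the function name, project ID and transfer config ID are placeholders, and Cloud Scheduler would simply call the function's HTTP trigger every 10 minutes):
# Hypothetical HTTP-triggered Cloud Function (main.py); Cloud Scheduler calls its
# trigger URL every 10 minutes. Project and transfer config IDs are placeholders.
import time
from google.cloud import bigquery_datatransfer_v1
from google.protobuf.timestamp_pb2 import Timestamp

PROJECT_ID = 'PROJECT_ID'
TRANSFER_CONFIG_ID = '5e6...7bc'

def trigger_transfer(request):
    client = bigquery_datatransfer_v1.DataTransferServiceClient()
    parent = client.transfer_config_path(PROJECT_ID, TRANSFER_CONFIG_ID)
    # Request a run a few seconds from now
    start_time = Timestamp(seconds=int(time.time()) + 10)
    response = client.start_manual_transfer_runs(
        request={'parent': parent, 'requested_run_time': start_time}
    )
    return str(response)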
Dataflow would be best if you needed ETL, but it also has a GCS connector that can watch a file pattern (example). With this you would skip the transfer and set the watch interval and the load job triggering frequency to write the files into BigQuery. VM(s) would be running constantly in a streaming pipeline, as opposed to the previous approach, but a 10-minute watch period is possible.
If you have complex workflows/dependencies, Airflow has recently introduced operators to start manual runs.
If I use a Cloud Function, how can I submit all the files in my GCS bucket at the time of invocation as one BQ load job?
You can use wildcards to match a file pattern when you create the transfer:
Also, this can be done on a file-by-file basis using Pub/Sub notifications for Cloud Storage to trigger a Cloud Function.
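As a rough sketch of the Cloud Function approach (bucket, dataset and table names below are made-up placeholders, and the function assumes CSV files with a header row matching the existing table schema), a single load job with a wildcard URI picks up every matching file at invocation time:
# Hypothetical Cloud Function: load all files matching a GCS wildcard into an
# existing BigQuery table as one load job. Names are placeholders.
from google.cloud import bigquery

def load_gcs_to_bq(request):
    client = bigquery.Client()
    uri = 'gs://MY_BUCKET/incoming/*.csv'  # wildcard matches all pending files
    table_id = 'PROJECT_ID.DATASET.TABLE'
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,  # assumes a header row
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    )
    load_job = client.load_table_from_uri(uri, table_id, job_config=job_config)
    load_job.result()  # wait for the load job to finish
    return 'Loaded {} rows into {}'.format(load_job.output_rows, table_id)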
Lastly, does anyone know if DTS will lower the limit to 10 minutes in the future?
There is already a feature request here. Feel free to star it to show your interest and receive updates.
Now you can easily run a manual BigQuery Data Transfer using the REST API:
HTTP request
POST https://bigquerydatatransfer.googleapis.com/v1/{parent=projects/*/locations/*/transferConfigs/*}:startManualRuns
Regarding the {parent=projects/*/locations/*/transferConfigs/*} part: check the CONFIGURATION tab of your transfer and use the resource name shown there.
More here:
https://cloud.google.com/bigquery-transfer/docs/reference/datatransfer/rest/v1/projects.locations.transferConfigs/startManualRuns
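For example, one way to call that endpoint from Python is with google-auth and an AuthorizedSession (a sketch with placeholder project, location and config IDs; the requestedRunTime field follows the REST reference linked above):
# Hypothetical sketch: calling startManualRuns over REST with google-auth.
# The parent resource name below is a placeholder.
from datetime import datetime, timezone
import google.auth
from google.auth.transport.requests import AuthorizedSession

credentials, _ = google.auth.default(scopes=['https://www.googleapis.com/auth/cloud-platform'])
session = AuthorizedSession(credentials)

parent = 'projects/PROJECT_ID/locations/us/transferConfigs/5e6...7bc'
url = 'https://bigquerydatatransfer.googleapis.com/v1/{}:startManualRuns'.format(parent)

body = {'requestedRunTime': datetime.now(timezone.utc).strftime('%Y-%m-%dT%H:%M:%SZ')}
response = session.post(url, json=body)
print(response.status_code, response.json())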
Following Guillem's answer and the API updates, this is my new code:
import time
from google.cloud.bigquery import datatransfer_v1
from google.protobuf.timestamp_pb2 import Timestamp
client = datatransfer_v1.DataTransferServiceClient()
config = '34y....654'
PROJECT_ID = 'PROJECT_ID'
TRANSFER_CONFIG_ID = config
parent = client.transfer_config_path(PROJECT_ID, TRANSFER_CONFIG_ID)
start_time = Timestamp(seconds=int(time.time()))
request = datatransfer_v1.types.StartManualTransferRunsRequest(
{ "parent": parent, "requested_run_time": start_time }
)
response = client.start_manual_transfer_runs(request, timeout=360)
print(response)
For this to work, you need to know the correct TRANSFER_CONFIG_ID.
In my case, I wanted to list all the BigQuery scheduled queries to get a specific ID. You can do it like this:
# Put your project ID here
PROJECT_ID = 'PROJECT_ID'
from google.cloud import bigquery_datatransfer_v1
bq_transfer_client = bigquery_datatransfer_v1.DataTransferServiceClient()
parent = bq_transfer_client.project_path(PROJECT_ID)
# Iterate over all results
for element in bq_transfer_client.list_transfer_configs(parent):
    # Print Display Name for each Scheduled Query
    print(f'[Schedule Query Name]:\t{element.display_name}')
    # Print name of all elements (it contains the ID)
    print(f'[Name]:\t\t{element.name}')
    # Extract the IDs:
    TRANSFER_CONFIG_ID = element.name.split('/')[-1]
    print(f'[TRANSFER_CONFIG_ID]:\t\t{TRANSFER_CONFIG_ID}')
    # You can print the entire element for debug purposes
    print(element)

runtimeService.getVariables does not work because it can't find the process instance id

I'm new to Flowable and I'm trying to start a process instance with variables. params here is the Map<String, Object> that I'm using to start the process. It all goes well, but if I try to get my variables back it tells me:
"execution 22f42f67-5f88-11e9-9df0-d46d6dbfea92 doesn't exist"
But if I search for it in my process instances list, it is there. This is what I do:
pi = runtimeService.startProcessInstanceById(processDefinitionId, params);
runtimeService.getVariables(pi.getId());
I'm stuck with this problem and I do not understand why it keeps doing this. What am I missing?
Flowable has the concept of RuntimeService and HistoryService. The first one contains only the runtime data (what is currently active) and the second one has all the data. The runtime data is a subset of the history data.
The reason you can't find the variables via the RuntimeService is that the process instance has already completed.
If you use the HistoryService instead, it will work as expected.

ArcGIS offline map layer changes synchronization

In my WPF application I'm trying to use offline map functionality. Right now my feature service is configured for data sync and I'm able to create a data replica on the server and download a local copy of the geodatabase.
_gdbSyncTask = await GeodatabaseSyncTask.CreateAsync(_featureServiceUri);
Envelope extent = new Envelope(xmin, ymin, xmax, ymax, new SpatialReference(wkidStart));
GenerateGeodatabaseParameters generateParams = await _gdbSyncTask.CreateDefaultGenerateGeodatabaseParametersAsync(extent);
_generateGdbJob = _gdbSyncTask.GenerateGeodatabase(generateParams, _gdbPath);
_generateGdbJob.JobChanged += GenerateGdbJobChanged;
_generateGdbJob.ProgressChanged += ((object sender, EventArgs e) =>
{
UpdateProgressBar();
});
_generateGdbJob.Start();
After initial synchronization, I’m able to successfully work with map in off-line mode. This includes operations like adding new geometries or editing existing polygons inside local DB.
However, when I try to synchronize the changes back to the server, I get no results.
To synchronize the local database with the server, I use the following code:
SyncGeodatabaseParameters parameters = new SyncGeodatabaseParameters()
{
GeodatabaseSyncDirection = SyncDirection.Bidirectional,
RollbackOnFailure = false
};
Geodatabase gdb = await Geodatabase.OpenAsync(this.GetGdbPath());
foreach (GeodatabaseFeatureTable table in gdb.GeodatabaseFeatureTables)
{
long id = table.ServiceLayerId;
SyncLayerOption option = new SyncLayerOption(id);
option.SyncDirection = SyncDirection.Bidirectional;
parameters.LayerOptions.Add(option);
}
_gdbSyncTask = await GeodatabaseSyncTask.CreateAsync(_featureServiceUri);
SyncGeodatabaseJob job = _gdbSyncTask.SyncGeodatabase(parameters, gdb);
job.JobChanged += SyncJob_JobChanged;
job.ProgressChanged += SyncJob_ProgressChanged;
job.Start();
Everything goes well. The synchronization ends with status “Succeeded”. The messages logged by the SyncGeodatabaseJob are like on the screen below:
However, when I open the edited feature layer from the server in the web map client, I cannot find any of my local changes. In the server database I can also see that no new records were created during the synchronization.
The interesting thing is that when I open the "Replica" data in the web interface, I can see the following information:
Replica Server Gen: 2
Creation Date: 2018/02/07 10:49:54 UTC
Last Sync Date: 2018/02/07 10:49:54 UTC
The "Last Sync Date" is equal to the replica "Creation Date". However, in the replica log in ArcMap I can see the following information:
Can anyone tell me how I should interpret the situation described above? Am I missing some steps in my code? Or maybe some configuration is missing on the server? It looks like data modifications are successfully pushed back to the replica on the server, but after that the replica is not synchronized with the server database (should that happen automatically?).
I'm new to ArcGIS development, so any help will be appreciated.
Thanks for all the answers. It turned out that versioning is enabled on the server database and the offline, versioned changes were not reconciled to the server.
After running a reconcile/post script (http://desktop.arcgis.com/en/arcmap/10.3/manage-data/geodatabases/automate-reconcile-post-after-sync.htm), the offline changes became visible to other system users.
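For reference, the reconcile/post step from that link boils down to an arcpy call along these lines (a sketch only; the connection file, version names, tool options and parameter order should be double-checked against the ReconcileVersions documentation for your ArcGIS version):
# Hypothetical sketch of the reconcile/post step described in the linked article.
# The admin .sde connection file and log path are placeholders.
import arcpy

workspace = r"C:\connections\gis_admin.sde"

# Reconcile every edit/replica version against DEFAULT (excluding DEFAULT itself)
versions = [v for v in arcpy.ListVersions(workspace) if v.upper() != "SDE.DEFAULT"]

arcpy.ReconcileVersions_management(
    workspace,               # input database (admin connection)
    "ALL_VERSIONS",          # reconcile mode
    "sde.DEFAULT",           # target version
    versions,                # edit versions to reconcile
    "LOCK_ACQUIRED",
    "NO_ABORT",
    "BY_OBJECT",
    "FAVOR_TARGET_VERSION",
    "POST",                  # post reconciled changes to DEFAULT
    "KEEP_VERSION",
    r"C:\logs\reconcile_log.txt",
)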
The code looks OK at a quick glance, so I would assume that there is something going on in the setup.
What do you get back from the sync operation after the sync has completed? Note that you can just use await syncJob.GetResultsAsync() to start the job and wait for the results.
How is the feature service set up on the server? Please refer to https://enterprise.arcgis.com/en/server/latest/publish-services/linux/prepare-data-for-offline-use.htm for the different ways to set these things up.

Ruby MongoDB: Three newly created objects don't appear to exist during testing

I'm using MongoDB to store some data. I have a function that gets the object with the latest timestamp and the one with the oldest. I haven't experienced any issues with this method during development or production, but when I try to implement a test for it, the test fails approx. 20% of the time. I'm using RSpec to test this method and I'm not using Mongoid or MongoMapper. I create three objects with different timestamps but get a nil response since my dataset contains 0 objects. I have read a lot of articles about write_concern and that it might be the problem with "unsafe writes", and I have tried almost all the different combinations of these parameters (w, fsync, j, wtimeout) without any success. Does anyone have any idea how to solve this issue? Perhaps I have focused too much on the write_concern track and the problem lies somewhere else.
This is the method that fetches the latest and oldest timestamp.
def first_and_last_timestamp(customer_id, system_id)
last = collection(customer_id).
find({sid:system_id}).
sort(["t",Mongo::DESCENDING]).
limit(1).next()
first = collection(customer_id).
find({sid:system_id}).
sort(["t",Mongo::ASCENDING]).
limit(1).next()
{ min: first["t"], max: last["t"] }
end
I'm inserting data using this method, where data is a JSON object.
def insert(customer_id, data)
collection(customer_id).insert(data)
end
I have reverted to the default way of setting up my connection:
Mongo::MongoClient.new(mongo_host, mongo_port)
I'm using the gem mongo (1.10.2). I'm not using any fancy setup for my mongo database. I've just installed mongo using brew on my mac and started it. The version of my mongo database is v2.6.1.

Hadoop MapReduce testing - custom record reader

I have written a custom record reader and am looking for sample test code to test my custom reader using MRUnit or any other testing framework. It works fine functionally, but I would like to add test cases before I make an install. Any help would be appreciated.
In my opinion, a custom record reader is like any iterator. For testing my record reader I have been able to work without MRUnit or any other Hadoop JUnit frameworks. The test executes quickly and the footprint is small too. Initialize the record reader in your test case and keep iterating on it. Here is pseudocode from one of my tests. I can provide you with more details if you want to proceed in this direction.
Configuration conf = new Configuration();
MyInputFormat myInputFormat = new MyInputFormat();
// configure the job and provide the input format configuration
Job job = Job.getInstance(conf, "test");
conf = job.getConfiguration();
// verify split type and count here if you also want to verify the input format
List<InputSplit> splits = myInputFormat.getSplits(job);
TaskAttemptContext context = new TaskAttemptContextImpl(conf, new TaskAttemptID());
RecordReader<LongWritable, Text> reader = myInputFormat.createRecordReader(splits.get(1), context);
reader.initialize(splits.get(1), context);
for (int i = 0; i < expectedRecordCount; i++) {
    assertTrue(reader.nextKeyValue());
    // verify key and value (expectedRecordCount and expectedLong are your expected values)
    assertEquals(expectedLong, reader.getCurrentKey().get());
}