Hi, I am trying to use PyEZ to create an automation script.
My goal is to save the response from "show bgp summary" for a logical system in a variable.
This one works:
bgpinfo = cor1.rpc.get_bgp_summary_information()
but I want to get the BGP summary for a logical system, based on this Juniper command:
user#COR1> show bgp summary logical-system EXTERNAL
Check whether the below line of code works:
bgpinfo = cor1.rpc.get_bgp_summary_information(logical_system='EXTERNAL')
I figured it out by running the following command on the device:
reg#vj1> show bgp summary logical-system EXTERNAL | display xml rpc
<rpc-reply xmlns:junos="http://xml.juniper.net/junos/18.2D0/junos">
<rpc>
<get-bgp-summary-information>
<logical-system>EXTERNAL</logical-system>
</get-bgp-summary-information>
</rpc>
<cli>
<banner></banner>
</cli>
</rpc-reply>
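Putting it together, here is a minimal sketch (the host name and credentials are placeholders; reuse whatever PyEZ Device object you already have as cor1):

from jnpr.junos import Device
from lxml import etree

# Placeholder connection details; substitute your own device and credentials.
cor1 = Device(host="cor1.example.net", user="admin", passwd="secret")
cor1.open()

# Equivalent of: show bgp summary logical-system EXTERNAL
# PyEZ maps the keyword logical_system to the <logical-system> element of the RPC.
bgpinfo = cor1.rpc.get_bgp_summary_information(logical_system="EXTERNAL")

# bgpinfo is an lxml element; dump it or pull individual fields with XPath.
print(etree.tostring(bgpinfo, pretty_print=True).decode())
for peer in bgpinfo.findall(".//bgp-peer"):
    print(peer.findtext("peer-address"), peer.findtext("peer-state"))

cor1.close()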
Do we have an API in qTest that can provide a summary of a test cycle's execution?
E.g. Passed: 23, Failed: 7, Unexecuted: 10, Running: 2.
We need this data to generate a report in our consolidated reporting tool, along with data from some other sources.
Nothing that gives exactly what you ask for, but you could use the API calls below to create it yourself.
You can get the status of all test runs in a project using
GET /api/v3/projects/{projectId}/test-runs/execution-statuses
Or, to get results from a specific test cycle, first find all the test runs in that cycle using
/api/v3/projects/{projectId}/test-runs?parentId={testCycleID}&parentType=test-cycle
(append &expand=descendants to find test runs in containers under the test cycle)
and then get the results of each run individually using
/api/v3/projects/{projectId}/test-runs/{testRunId}/test-logs/last-run
See https://qtest.dev.tricentis.com/
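As a rough sketch (not an official qTest sample; the base URL, token, IDs, and the response field names "items", "status" and "name" are assumptions you should verify against the API docs above), the two calls could be combined like this:

from collections import Counter
import requests

BASE = "https://yourcompany.qtestnet.com"   # hypothetical qTest instance
HEADERS = {"Authorization": "Bearer <token>"}
PROJECT_ID = 12345                          # hypothetical project ID
TEST_CYCLE_ID = 67890                       # hypothetical test cycle ID

# 1) Find all test runs under the cycle (including runs nested in sub-containers).
runs = requests.get(
    f"{BASE}/api/v3/projects/{PROJECT_ID}/test-runs",
    params={"parentId": TEST_CYCLE_ID, "parentType": "test-cycle", "expand": "descendants"},
    headers=HEADERS,
).json()

# 2) Fetch the latest result of each run and tally the statuses.
summary = Counter()
for run in runs.get("items", []):
    log = requests.get(
        f"{BASE}/api/v3/projects/{PROJECT_ID}/test-runs/{run['id']}/test-logs/last-run",
        headers=HEADERS,
    ).json()
    summary[log.get("status", {}).get("name", "Unexecuted")] += 1

print(dict(summary))  # e.g. {'Passed': 23, 'Failed': 7, 'Unexecuted': 10, 'Running': 2}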
I have 2 nodes of a cluster receiving messages from IoT Hub. I split their responsibility by partition: node 1 reads from partitions 1, 3, 5, 7, and 9, and the other from 2, 4, 6, 8, and 0. Recently, partition 8 has been stopping responding until I stop my code and restart it. It seems like a device is sending a message that locks up the partition. What I want to do is list all the devices in my partition 8. Is that possible? Is there a Cloud Shell command to get those devices in a list?
Not sure this will help you, but you can see the partition on the incoming messages. For example you could use Azure Stream Analytics to see the partitions using this query:
Select GetMetadataPropertyValue(IoTHub, '[IoTHub].[ConnectionDeviceId]') as DeviceId, partitionId
from IoTHub
Also, if you run the job locally in Visual Studio, it will tell you which device is sending malformed JSON, e.g.
[Warning] 10/21/2021 9:12:54 AM : User Warning Source 'IoTHub' had 1 occurrences of kind 'InputDeserializerError.InvalidData' between processing times '2021-10-21T15:12:50.5076449Z' and '2021-10-21T15:12:50.5712076Z'. Could not deserialize the input event(s) from resource 'Partition: [1], Offset: [455266583232], SequenceNumber: [634800], DeviceId: [DeviceName]' as Json. Some possible reasons: 1) Malformed events 2) Input source configured with incorrect serialization format
Also check your "Activity Log" blade in the ASA job. It may have more details for you.
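If you only need the list of device IDs seen on partition 8, another option (a sketch, assuming you can read the IoT Hub's built-in Event Hub-compatible endpoint with the azure-eventhub Python package; the connection string is a placeholder) is to consume that one partition directly and collect the sender IDs from the message system properties:

from azure.eventhub import EventHubConsumerClient

# Event Hub-compatible connection string of the IoT Hub (placeholder).
CONN_STR = "Endpoint=sb://...;SharedAccessKeyName=...;SharedAccessKey=...;EntityPath=..."
seen_devices = set()

def on_event(partition_context, event):
    # IoT Hub stamps the sending device ID on each message's system properties.
    device_id = event.system_properties.get(b"iothub-connection-device-id")
    if device_id:
        seen_devices.add(device_id.decode())
        print(sorted(seen_devices))

client = EventHubConsumerClient.from_connection_string(CONN_STR, consumer_group="$Default")
with client:
    # Read only partition 8, starting from the beginning of its retained events.
    client.receive(on_event=on_event, partition_id="8", starting_position="-1")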
I am getting the following error in a pipeline that has a Copy activity with a REST API as source and Azure Data Lake Storage Gen2 as sink.
"message": "Failure happened on 'Sink' side. ErrorCode=AdlsGen2OperationFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=ADLS Gen2 operation failed for: Operation returned an invalid status code 'Conflict'. Account: '{Storage Account Name}'. FileSystem: '{Container Name}'. Path: 'foodics_v2/Burgerizzr/transactional/_567a2g7a/2018-02-09/raw/inventory-transactions.json'. ErrorCode: 'LeaseAlreadyPresent'. Message: 'There is already a lease present.'. RequestId: 'd27f1a3d-d01f-0003-28fb-400303000000'..,Source=Microsoft.DataTransfer.ClientLibrary,''Type=Microsoft.Azure.Storage.Data.Models.ErrorSchemaException,Message=Operation returned an invalid status code 'Conflict',Source=Microsoft.DataTransfer.ClientLibrary,'",
The pipeline runs in a ForEach loop with batch count = 5. When I make it sequential, the error goes away, but I need to run it in parallel.
This is a known issue caused by an ADF limitation: pipeline variables are not thread-safe when a ForEach loop runs its iterations in parallel.
You are probably trying to set the file name through a variable inside the loop.
One option is to move the steps that follow each variable assignment into a child pipeline,
i.e. Set Variable -> Execute Pipeline.
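For intuition, here is a toy Python sketch of the underlying problem (an illustration only, not ADF code): a single pipeline-scoped variable shared by parallel iterations gets overwritten, so two copies can end up writing to the same sink path and fight over the file lease:

import time
from concurrent.futures import ThreadPoolExecutor

shared = {"filename": None}  # plays the role of a single pipeline-level variable

def copy_activity(item):
    shared["filename"] = f"{item}.json"   # the "Set variable" step inside the loop
    time.sleep(0.01)                      # another parallel iteration may overwrite it here
    return shared["filename"]             # the Copy sink uses whatever the variable holds now

# Batch count = 5: five iterations run at once, all touching the same variable.
with ThreadPoolExecutor(max_workers=5) as pool:
    print(list(pool.map(copy_activity, ["a", "b", "c", "d", "e"])))
    # Frequently prints duplicated names instead of a.json ... e.json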
or
remove those variables and hard-code the expressions directly in the activity settings.
Hope this helps
When running "dbt run" on the following model:
{{ config(materialized='table') }}
SELECT customer_id FROM `hello-data-pipeline.adwords.google_ads_campaign_stats`
I am making sure that my FROM location contains 3 parts
A project (hello-data-pipeline)
A database (adwords)
A table (google_ads_campaign_stats)
But I get the following error
15:41:51 | 2 of 3 START table model staging_benjamin.yo......................... [RUN]
15:41:51 | 2 of 3 ERROR creating table model staging_benjamin.yo................ [ERROR in 0.32s]
Runtime Error in model yo (models/yo.sql)
404 Not found: Dataset hello-data-pipeline:staging_benjamin was not found in location EU
NB: BigQuery does not show any error when I run this query in the BigQuery editor.
NB 2: dbt does not show any error when I run the SQL directly in the IDE's script editor either.
What am I doing wrong?
You may need to specify a location where your query will run. Queries that run in a specific location may only reference data in that location. You may choose auto-select to run the query in the location where the data resides.
Read more about Dataset locations
OK, I found it. I needed to specify the location in the profiles.yml file.
=> https://docs.getdbt.com/reference/warehouse-profiles/bigquery-profile/#dataset-locations
In dbt Cloud you will find it when setting up your project.
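For reference, a minimal sketch of what that could look like in profiles.yml (the profile name and credentials method are placeholders; the key that matters here is location):

my-bigquery-profile:
  target: dev
  outputs:
    dev:
      type: bigquery
      method: oauth
      project: hello-data-pipeline
      dataset: staging_benjamin
      location: EU   # must match the location of the datasets you query
      threads: 4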
I had a similar error to your 'hello-data-pipeline:staging_benjamin was not found in location EU'.
However, my issue was not that the dataset was in the wrong location; it was that dbt was not targeting the schema I wanted.
E.g. in your example, it would be that hello-data-pipeline:staging_benjamin is actually not the target schema you initially wanted.
Adding this bit of code on top of my query solved the issue.
{{ config(schema='marketing') }}
select ...
cf. dbt's docs on custom schemas: https://docs.getdbt.com/docs/building-a-dbt-project/building-models/using-custom-schemas
Here is another doc that helped me understand why this was happening:
"dbt Cloud IDE: The values are defined by your connection and credentials. To check any of these values, head to your account (via your profile image in the top right hand corner), and select the project under "Credentials".
https://docs.getdbt.com/reference/dbt-jinja-functions/target
I have got an issue here; can someone please help me resolve it?
I was trying to extract some data to DataSource 0FI_AP_6...
Then in the InfoPackage monitor I can see the following:
-->Requests (messages): Everything OK
-->Extraction (messages): Everything OK
-->Transfer (IDocs and TRFC): Missing messages or warnings
-->Info IDoc 2 : sent, not arrived ; IDoc ready for dispatch (ALE service)
Data Package 1 : 23752 Records arrived in BW
Data Package 2 : 15216 Records arrived in BW
Request IDoc : Application document posted
Info IDoc 1 : Application document posted
Info IDoc 3 : Application document posted
Info IDoc 4 : Application document posted
-->Processing (data packet): Everything OK
Data Package 1 ( 38672 Records ) : Everything OK
In the status menu I have a message like this:
Missing data packages for PSA Table
Diagnosis
Data packets are missing from PSA Table . BI processing does not
return any errors. The data transport from the source system to BI was
probably incorrect.
Procedure
Check the tRFC overview in the source system.
You access this log using the wizard or following the menu path
"Environment -> Transact. RFC -> Source System".
Error handling:
If the tRFC is incorrect, resolve the errors listed there.
Check that the source system is connected properly to BI. In
particular, check the remote user authorizations in BI.
Please suggest how I can resolve this issue.
Thanks in advance for your help; a quick reply is much appreciated.
But the worst thing is that I deleted the InfoPackage request in the PSA by mistake.
Normally, if I repeated the process, the delta load would be fine, but now the delta load keeps failing.
So, gurus:
1. How can I restart my delta loading correctly?
2. I want to modify the timestamp in the delta table, but how do I do it?
Go to T-Code RSA7 in the source system. This will tell you the date/timestamp that the delta is set to. If the date was changed to a range that no longer works, then you will need to re-initialize the DataSource on the BW side. However, the delta date may still be fine, because it may never have been changed when you first tried your load due to the connection issues.
You can create a new infopackage and set the update to Initialize Datasource with Data Transfer. This will essentially run a full load from the datasource and then reset the delta pointer date/timestamp to when you ran it. This way you will capture all the data that you needed and anything that was already in the PSA should be overwritten.
Also note that you should delete or set the request status to red on the previous request that may contain bad data in the PSA.
From the original error it seems like you are having an RFC connection issue between the datasource and BW. Contact your BASIS support and have them check the connection to make sure it is good. To ensure that your datasource is extracting properly you can run t-code RSA3 on it in the source system. This will ensure that the extraction of data is working properly.