Suppose Process A creates a temporary client certificate store, launches Process B, passes the cert store handle to it via some form of inter-process communication (IPC), and then exits (see the outline below).
When Process B starts running, it fetches the cert store handle and tries to process the temporary client store.
The question is: "Can a temporary cert store created in parent Process A (which exits) still be accessible by child Process B?" Thanks!
Process A
+--------------------------------------------------+
| CreateFile("certificate.pfx", ...)               |
| ReadFile(hFile, ...)                             |
| Create CRYPT_DATA_BLOB                           |
| PFXImportCertStore(&cryptBlob, ...)              |
| CreateProcess(Process B, hCertStore, ..., TRUE)  |
| (TRUE indicates new process inherits hCertStore) |
| Process A exits                                  |
+--------------------------------------------------+
Process B
+--------------------------------------------------+
| Get handle hCertStore using inter-process comms  |
| CertFindCertificateInStore(hCertStore, ...)      |
| Process the temporary cert store...              |
+--------------------------------------------------+
I ran a test, passing the temporary cert store handle from Process A to Process B, and was greeted with a crash dump:
STACK_TEXT:
00000017`7df4fcc0 00007ffa`655574c6 : 00000017`7df4fd30 00000017`7df4fd30 00000017`7df4fd10 00000000`ffffffff : crypt32!AutoResyncStore+0x10
00000017`7df4fd20 00007ff7`23c943cd : 0000016d`0dc53a2e 00000000`00000000 00007ff7`23cac7c2 00000000`0118b2d8 : crypt32!CertFindCertificateInStore+0x56
00000017`7df4fd70 0000016d`0dc53a2e : 00000000`00000000 00007ff7`23cac7c2 00000000`0118b2d8 00000000`00000000 : HelloWorld64!WinMain+0x121
00000017`7df4fd78 00000000`00000000 : 00007ff7`23cac7c2 00000000`0118b2d8 00000000`00000000 00000000`00000000 : 0x0000016d`0dc53a2e
SYMBOL_NAME: crypt32!AutoResyncStore+10
MODULE_NAME: crypt32
IMAGE_NAME: crypt32.dll
STACK_COMMAND: ~0s ; .ecxr ; kb
FAILURE_BUCKET_ID: INVALID_POINTER_READ_c0000005_crypt32.dll!AutoResyncStore
OS_VERSION: 10.0.18362.1
BUILDLAB_STR: 19h1_release
OSPLATFORM_TYPE: x64
OSNAME: Windows 10
So it looks like it's not possible to access a temporary cert store from another process through a shared handle. In hindsight this makes sense: an HCERTSTORE is not a kernel object handle but a pointer into crypt32's in-memory data for the process that created the store, so CreateProcess handle inheritance doesn't apply and the value is meaningless (and dangling) in Process B. As a workaround, I'll look into passing the PFX BLOB instead:
1. Pass the PFX BLOB (i.e., the .pfx file bytes) via IPC to another process.
2. Create a globally shared, named memory mapping for the BLOB, so any process that opens the mapping can read the bytes and import the cert itself (a rough sketch follows below).
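Not from the original post, but here is a minimal sketch in Python (3.8+) of what the shared-memory hand-off could look like. The mapping name pfx_blob_share and the function names are made up for illustration. Only the raw PFX bytes are shared; each process must call PFXImportCertStore itself, because an HCERTSTORE cannot cross the process boundary:

from multiprocessing import shared_memory

SHARE_NAME = "pfx_blob_share"   # placeholder; agree on the name via your IPC channel

def publish_pfx(path):
    """Process A: copy the raw .pfx bytes into a named shared-memory block."""
    pfx_bytes = open(path, "rb").read()
    shm = shared_memory.SharedMemory(name=SHARE_NAME, create=True, size=len(pfx_bytes))
    shm.buf[:len(pfx_bytes)] = pfx_bytes
    # Keep `shm` open until Process B has attached: on Windows the mapping is
    # destroyed when the last handle closes, so Process A cannot exit immediately.
    return shm, len(pfx_bytes)

def read_pfx(length):
    """Process B: attach to the mapping and copy the blob out."""
    shm = shared_memory.SharedMemory(name=SHARE_NAME)
    blob = bytes(shm.buf[:length])   # the length is sent over IPC alongside the name
    shm.close()
    # Wrap `blob` in a CRYPT_DATA_BLOB and call PFXImportCertStore *here*, in
    # Process B, so the resulting HCERTSTORE is valid in this process.
    return blob

Alternatively, writing the blob to a temporary file and passing the path works just as well; the only thing that cannot be shared is the store handle itself.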
Thanks.
I'm trying to create a new BigQuery destination on Airbyte with the Octavia CLI.
When launching:
octavia apply
I receive:
Error: {"message":"The provided configuration does not fulfill the specification. Errors: json schema validation failed when comparing the data to the json schema. \nErrors:
$.loading_method.method: must be a constant value Standard
Here is my conf:
# Configuration for airbyte/destination-bigquery
# Documentation about this connector can be found at https://docs.airbyte.com/integrations/destinations/bigquery
resource_name: "BigQueryFromOctavia"
definition_type: destination
definition_id: 22f6c74f-5699-40ff-833c-4a879ea40133
definition_image: airbyte/destination-bigquery
definition_version: 1.2.12
# EDIT THE CONFIGURATION BELOW!
configuration:
  dataset_id: "airbyte_octavia_thibaut" # REQUIRED | string | The default BigQuery Dataset ID that tables are replicated to if the source does not specify a namespace. Read more here.
  project_id: "data-airbyte-poc" # REQUIRED | string | The GCP project ID for the project containing the target BigQuery dataset. Read more here.
  loading_method:
    ## -------- Pick one valid structure among the examples below: --------
    # method: "Standard" # REQUIRED | string
    ## -------- Another valid structure for loading_method: --------
    method: "GCS Staging" # REQUIRED | string
    credential:
      ## -------- Pick one valid structure among the examples below: --------
      credential_type: "HMAC_KEY" # REQUIRED | string
      hmac_key_secret: ${AIRBYTE_BQ1_HMAC_KEY_SECRET} # SECRET (please store in environment variables) | REQUIRED | string | The corresponding secret for the access ID. It is a 40-character base-64 encoded string. | Example: 1234567890abcdefghij1234567890ABCDEFGHIJ
      hmac_key_access_id: ${AIRBYTE_BQ1_HMAC_KEY_ACCESS_ID} # SECRET (please store in environment variables) | REQUIRED | string | HMAC key access ID. When linked to a service account, this ID is 61 characters long; when linked to a user account, it is 24 characters long. | Example: 1234567890abcdefghij1234
      gcs_bucket_name: "airbyte-octavia-thibaut-gcs" # REQUIRED | string | The name of the GCS bucket. Read more here. | Example: airbyte_sync
      gcs_bucket_path: "gcs" # REQUIRED | string | Directory under the GCS bucket where data will be written. | Example: data_sync/test
    # keep_files_in_gcs-bucket: "Delete all tmp files from GCS" # OPTIONAL | string | This upload method is supposed to temporary store records in GCS bucket. By this select you can chose if these records should be removed from GCS when migration has finished. The default "Delete all tmp files from GCS" value is used if not set explicitly.
  credentials_json: ${AIRBYTE_BQ1_CREDENTIALS_JSON} # SECRET (please store in environment variables) | OPTIONAL | string | The contents of the JSON service account key. Check out the docs if you need help generating this key. Default credentials will be used if this field is left empty.
  dataset_location: "europe-west1" # REQUIRED | string | The location of the dataset. Warning: Changes made after creation will not be applied. Read more here.
  transformation_priority: "interactive" # OPTIONAL | string | Interactive run type means that the query is executed as soon as possible, and these queries count towards concurrent rate limit and daily limit. Read more about interactive run type here. Batch queries are queued and started as soon as idle resources are available in the BigQuery shared resource pool, which usually occurs within a few minutes. Batch queries don’t count towards your concurrent rate limit. Read more about batch queries here. The default "interactive" value is used if not set explicitly.
  big_query_client_buffer_size_mb: 15 # OPTIONAL | integer | Google BigQuery client's chunk (buffer) size (MIN=1, MAX = 15) for each table. The size that will be written by a single RPC. Written data will be buffered and only flushed upon reaching this size or closing the channel. The default 15MB value is used if not set explicitly. Read more here. | Example: 15
It was an indentation issue on my side. These two lines:
      gcs_bucket_name: "airbyte-octavia-thibaut-gcs" # REQUIRED | string | The name of the GCS bucket. Read more here. | Example: airbyte_sync
      gcs_bucket_path: "gcs" # REQUIRED | string | Directory under the GCS bucket where data will be written. | Example: data_sync/test
should be one level higher, i.e. directly under loading_method rather than under credential. This wasn't clear from the commented template, hence the error, and other people may well make the same mistake.
Here is the full final conf:
# Configuration for airbyte/destination-bigquery
# Documentation about this connector can be found at https://docs.airbyte.com/integrations/destinations/bigquery
resource_name: "BigQueryFromOctavia"
definition_type: destination
definition_id: 22f6c74f-5699-40ff-833c-4a879ea40133
definition_image: airbyte/destination-bigquery
definition_version: 1.2.12
# EDIT THE CONFIGURATION BELOW!
configuration:
  dataset_id: "airbyte_octavia_thibaut" # REQUIRED | string | The default BigQuery Dataset ID that tables are replicated to if the source does not specify a namespace. Read more here.
  project_id: "data-airbyte-poc" # REQUIRED | string | The GCP project ID for the project containing the target BigQuery dataset. Read more here.
  loading_method:
    ## -------- Pick one valid structure among the examples below: --------
    # method: "Standard" # REQUIRED | string
    ## -------- Another valid structure for loading_method: --------
    method: "GCS Staging" # REQUIRED | string
    credential:
      ## -------- Pick one valid structure among the examples below: --------
      credential_type: "HMAC_KEY" # REQUIRED | string
      hmac_key_secret: ${AIRBYTE_BQ1_HMAC_KEY_SECRET} # SECRET (please store in environment variables) | REQUIRED | string | The corresponding secret for the access ID. It is a 40-character base-64 encoded string. | Example: 1234567890abcdefghij1234567890ABCDEFGHIJ
      hmac_key_access_id: ${AIRBYTE_BQ1_HMAC_KEY_ACCESS_ID} # SECRET (please store in environment variables) | REQUIRED | string | HMAC key access ID. When linked to a service account, this ID is 61 characters long; when linked to a user account, it is 24 characters long. | Example: 1234567890abcdefghij1234
    gcs_bucket_name: "airbyte-octavia-thibaut-gcs" # REQUIRED | string | The name of the GCS bucket. Read more here. | Example: airbyte_sync
    gcs_bucket_path: "gcs" # REQUIRED | string | Directory under the GCS bucket where data will be written. | Example: data_sync/test
    # keep_files_in_gcs-bucket: "Delete all tmp files from GCS" # OPTIONAL | string | This upload method is supposed to temporary store records in GCS bucket. By this select you can chose if these records should be removed from GCS when migration has finished. The default "Delete all tmp files from GCS" value is used if not set explicitly.
  credentials_json: ${AIRBYTE_BQ1_CREDENTIALS_JSON} # SECRET (please store in environment variables) | OPTIONAL | string | The contents of the JSON service account key. Check out the docs if you need help generating this key. Default credentials will be used if this field is left empty.
  dataset_location: "europe-west1" # REQUIRED | string | The location of the dataset. Warning: Changes made after creation will not be applied. Read more here.
  transformation_priority: "interactive" # OPTIONAL | string | Interactive run type means that the query is executed as soon as possible, and these queries count towards concurrent rate limit and daily limit. Read more about interactive run type here. Batch queries are queued and started as soon as idle resources are available in the BigQuery shared resource pool, which usually occurs within a few minutes. Batch queries don’t count towards your concurrent rate limit. Read more about batch queries here. The default "interactive" value is used if not set explicitly.
  big_query_client_buffer_size_mb: 15 # OPTIONAL | integer | Google BigQuery client's chunk (buffer) size (MIN=1, MAX = 15) for each table. The size that will be written by a single RPC. Written data will be buffered and only flushed upon reaching this size or closing the channel. The default 15MB value is used if not set explicitly. Read more here. | Example: 15
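Not part of the original answer, but a quick way to catch this kind of nesting mistake before running octavia apply is to load the YAML and check where the keys actually ended up. A minimal sketch, assuming PyYAML is installed and the file is saved as destination.yaml (both assumptions):

# Hypothetical sanity check; this is plain PyYAML, not an octavia feature.
import yaml

with open("destination.yaml") as f:           # assumed file name
    cfg = yaml.safe_load(f)

lm = cfg["configuration"]["loading_method"]
# For the "GCS Staging" variant, the bucket keys must sit directly under
# loading_method, not under loading_method.credential.
assert "gcs_bucket_name" in lm and "gcs_bucket_path" in lm, \
    "gcs_bucket_* keys are nested one level too deep"
assert "gcs_bucket_name" not in (lm.get("credential") or {}), \
    "gcs_bucket_name should not be under credential"
print("loading_method keys:", sorted(lm))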
I have 6 Ignite nodes, all connected to form a cluster, and I have configured 2 backup copies. I have put 20 entries into the cluster to check the partitioning and the data (primary and backup). I can see the count using the cache -a -r command.
Is there a command or way to see the actual data on each node, i.e. the primary data as well as the backup copies?
You could use cache -scan -c=cacheName in the Visor command-line console:
Entries in cache: SQL_PUBLIC_PERSON
+===================+=====+=================================+===================================================+
| Key Class         | Key | Value Class                     | Value                                             |
+===================+=====+=================================+===================================================+
| java.lang.Integer | 1   | o.a.i.i.binary.BinaryObjectImpl | SQL_PUBLIC_PERSON_.. [hash=357088963, NAME=Name1] |
+-------------------+-----+---------------------------------+---------------------------------------------------+
Use help cache to see all cache-related commands.
See: https://apacheignite-tools.readme.io/docs/command-line-interface
You also have the option of turning on SQL: https://apacheignite-sql.readme.io/docs/schema-and-indexes
and: https://apacheignite-sql.readme.io/docs/getting-started
and then use JDBC/SQL to see the entries in your cache.
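Not mentioned in the answer above, but if you would rather inspect entries programmatically than through Visor, here is a minimal sketch using the pyignite thin client; the host, port and cache name are assumptions, and it requires the thin-client port (10800 by default) to be open:

# pip install pyignite
from pyignite import Client

client = Client()
client.connect('127.0.0.1', 10800)             # thin-client port of any node

cache = client.get_cache('SQL_PUBLIC_PERSON')  # assumed cache name
for key, value in cache.scan():                # iterates over all entries
    print(key, value)

client.close()

Note that a scan returns each entry once (served from the primary partitions); neither this nor cache -scan will show you which node holds the primary copy and which holds the backups.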
I am struggling to load most of the Drug Ontology OWL files and most of the ChEBI OWL files into a GraphDB Free v8.3 repository with Optimized OWL-Horst reasoning turned on.
Is this possible? Should I do something other than "be patient"?
Details:
I'm using the loadrdf offline bulk loader to populate an AWS r4.16xlarge instance with 488.0 GiB of RAM and 64 vCPUs.
Over the weekend, I played around with different pool buffer sizes and found that most of these files individually load fastest with a pool buffer of 2,000 or 20,000 statements instead of the suggested 200,000. I also added -Xmx470g to the loadrdf script. Most of the OWL files would load individually in less than one hour.
Around 10 pm EDT last night, I started to load all of the files listed below simultaneously. Now it's 11 hours later, and there are still millions of statements to go. The load rate is around 70/second now. It appears that only 30% of my RAM is being used, but the CPU load is consistently around 60.
Are there websites that document other people doing something at this scale?
Should I be using a different reasoning configuration? I chose this configuration because it was the fastest-loading OWL configuration, based on my experiments over the weekend. I think I will need to look for relationships that go beyond rdfs:subClassOf.
Files I'm trying to load:
+-------------+------------+---------------------+
| bytes | statements | file |
+-------------+------------+---------------------+
| 471,265,716 | 4,268,532 | chebi.owl |
| 61,529 | 451 | chebi-disjoints.owl |
| 82,449 | 1,076 | chebi-proteins.owl |
| 10,237,338 | 135,369 | dron-chebi.owl |
| 2,374 | 16 | dron-full.owl |
| 170,896 | 2,257 | dron-hand.owl |
| 140,434,070 | 1,986,609 | dron-ingredient.owl |
| 2,391 | 16 | dron-lite.owl |
| 234,853,064 | 2,495,144 | dron-ndc.owl |
| 4,970 | 28 | dron-pro.owl |
| 37,198,480 | 301,031 | dron-rxnorm.owl |
| 137,507 | 1,228 | dron-upper.owl |
+-------------+------------+---------------------+
@MarkMiller you can take a look at the Preload tool, which is part of the GraphDB 8.4.0 release. It's specially designed to handle large amounts of data at a constant speed. Note that it works without inference, so you'll need to load your data and then change the ruleset and re-infer the statements.
http://graphdb.ontotext.com/documentation/free/loading-data-using-preload.html
Just typing out @Konstantin Petrov's correct suggestion with tidier formatting. All of these queries should be run against the repository of interest... at some point while working this out, I misled myself into thinking that I should be connected to the SYSTEM repo when running them.
All of these queries also require the following prefix definition:
prefix sys: <http://www.ontotext.com/owlim/system#>
This doesn't directly address the timing/performance of loading large datasets into an OWL reasoning repository, but it does show how to switch to a higher level of reasoning after loading lots of triples into a no-inference ("empty" ruleset) repository.
You could start by listing the repository's rulesets, and then re-run this same SELECT after each INSERT below:
SELECT ?state ?ruleset {
?state sys:listRulesets ?ruleset
}
Add a predefined ruleset
INSERT DATA {
_:b sys:addRuleset "rdfsplus-optimized"
}
Make the new ruleset the default
INSERT DATA {
_:b sys:defaultRuleset "rdfsplus-optimized"
}
Re-infer... could take a long time!
INSERT DATA {
[] <http://www.ontotext.com/owlim/system#reinfer> []
}
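Not part of the original answer, but if you prefer to script these steps rather than paste them into the Workbench, GraphDB repositories also expose the standard RDF4J REST endpoints, so the same updates can be POSTed with any HTTP client. A minimal sketch; the host, port and repository id are assumptions:

import requests

GRAPHDB = "http://localhost:7200"   # assumed GraphDB base URL
REPO = "myrepo"                     # assumed repository id

def run_update(update):
    """POST a SPARQL update to the repository's /statements endpoint."""
    r = requests.post(
        f"{GRAPHDB}/repositories/{REPO}/statements",
        data=update.encode("utf-8"),
        headers={"Content-Type": "application/sparql-update"},
    )
    r.raise_for_status()

PREFIX = 'prefix sys: <http://www.ontotext.com/owlim/system#>\n'
run_update(PREFIX + 'INSERT DATA { _:b sys:addRuleset "rdfsplus-optimized" }')
run_update(PREFIX + 'INSERT DATA { _:b sys:defaultRuleset "rdfsplus-optimized" }')
run_update('INSERT DATA { [] <http://www.ontotext.com/owlim/system#reinfer> [] }')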
I have existing Selenium tests, written in Robot Framework and edited in RIDE, that I'm trying to run on Sauce Labs.
I'm using the sample test from this tutorial to see if I can get at least one test running. http://datakurre.pandala.org/2014/03/cross-browser-selenium-testing-with.html
The test passes locally, and all of the steps pass on Sauce Labs, but then it times out and reports the error "Test did not see a new command for 90 seconds. Timing out." because the remote WebDriver session is never disconnected.
I've tried all of these, together and separately, at the end of the "Close test browser" keyword:
Close all browsers
Process close
Stop selenium server
I've also tried adding ((RemoteWebDriver) getCurrentWebDriver()).quit() in one of the Python functions that runs during the closing process. I'm new to Selenium and Robot Framework, so I'm not sure how to grab the remote WebDriver.
Here is the code, in case that helps:
*** Settings ***
Test Setup        Open test browser
Test Teardown     Close test browser
Resource          ../../Keywords/super.txt
Library           Selenium2Library
Library           ../../Library/SauceLabs.py

*** Variables ***
${LOGIN_FAIL_MSG}          Incorrect username or password.
${COMMAND_EXECUTOR}        http://username:key@ondemand.saucelabs.com:80/wd/hub
${REMOTE_URL}              http://username:key@ondemand.saucelabs.com:80/wd/hub
${DESIRED_CAPABILITIES}    username:name,access-key:key,name:Testing RobotFramework,platform:Windows 8.1,version:26,browserName:CHROME,javascriptEnabled:True

*** Test Cases ***
Incorrect username or password
    [Tags]    Login
    Go to    https://saucelabs.com/login
    Page should contain element    id=username
    Page should contain element    id=password
    Input text    id=username    anonymous
    Input text    id=password    secret
    Click button    id=submit
    Page should contain    ${LOGIN_FAIL_MSG}
    [Teardown]

*** Keywords ***
Open test browser
    Open browser    http://www.google.com    ${BROWSER}    remote_url=${REMOTE_URL}    desired_capabilities=${DESIRED_CAPABILITIES}

Close test browser
    Run keyword if    '${REMOTE_URL}' != ''    Report Sauce status    ${SUITE_NAME} | ${TEST_NAME}    ${TEST_STATUS}    ${TEST_TAGS}    ${REMOTE_URL}
    Close all browsers
    Process close
    Stop selenium server
You shouldn't need to do anything special to close down the connection. My guess is that there's something in your test that is preventing the browser from being closed. My recommendation is to start with a simpler example, run from the command line. Get that working, and then work your way up to running something more complex from within RIDE.
Here is a working example where I removed all of the extra stuff in the test. I am able to run this both from the command line and via RIDE on Windows. You'll have to add in your own key, however:
*** Settings ***
| Library | Selenium2Library
*** Variables ***
| @{_tmp}
| ... | name:Testing RobotFramework Selenium2Library,
| ... | browserName:internet explorer,
| ... | platform:Windows 8,
| ... | version:10
| ${CAPABILITIES} | ${EMPTY.join(${_tmp})}
| ${KEY} | <put your username:key here>
| ${REMOTE_URL} | http://${KEY}@ondemand.saucelabs.com:80/wd/hub
| ${URL} | https://saucelabs.com/login
| ${LOGIN_FAIL_MSG} | Incorrect username or password.
*** Test cases ***
| Example of connecting to saucelabs via robot
| | [Setup]
| | ... | Open Browser
| | ... | ${URL}
| | ... | remote_url=${REMOTE_URL}
| | ... | desired_capabilities=${CAPABILITIES}
| |
| | Page should contain element | id=username
| | Page should contain element | id=password
| |
| | Input text | id=username | anonymous
| | Input text | id=password | secret
| | Click button | id=submit
| |
| | Page should contain | ${LOGIN_FAIL_MSG}
| |
| | [Teardown] | Close all browsers
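A side note on the asker's question about grabbing the RemoteWebDriver: Selenium2Library keeps the active driver internally, and a Python keyword library (such as the SauceLabs.py helper from the tutorial) can reach it through BuiltIn. A rough sketch; _current_browser() is a Selenium2Library-internal method, so treat it as fragile and prefer a plain Close All Browsers when you can:

# Sketch of fetching the underlying (remote) WebDriver from a keyword library.
from robot.libraries.BuiltIn import BuiltIn

def quit_remote_driver():
    s2l = BuiltIn().get_library_instance('Selenium2Library')
    driver = s2l._current_browser()   # internal API: the active WebDriver instance
    print("Sauce session id:", driver.session_id)
    driver.quit()                     # explicitly ends the remote session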
How can I read a file from a specific byte position in Robot Framework?
Let's say I have a long-running process that writes a long log file. I want to record the current file size, then execute something that affects the behaviour of the process, and wait until some message appears in the log file. I want to read only the portion of the file starting from the previously recorded size.
I am new to Robot Framework. I think this is a very common scenario, but I haven't found a way to do it.
There are no built-in keywords to do this, but writing one in Python is pretty simple.
For example, create a file named "readmore.py" with the following:
from robot.libraries.BuiltIn import BuiltIn

class readmore(object):
    ROBOT_LIBRARY_SCOPE = "TEST SUITE"

    def __init__(self):
        self.fp = {}

    def read_more(self, path):
        # if we don't already know about this file,
        # set the file pointer to zero
        if path not in self.fp:
            BuiltIn().log("setting fp to zero", "DEBUG")
            self.fp[path] = 0

        # open the file, move the pointer to the stored
        # position, read the file, and reset the pointer
        with open(path) as f:
            BuiltIn().log("seeking to %s" % self.fp[path], "DEBUG")
            f.seek(self.fp[path])
            data = f.read()
            self.fp[path] = f.tell()
            BuiltIn().log("resetting fp to %s" % self.fp[path], "DEBUG")

        return data
You can then use it like this:
*** Settings ***
| Library | readmore.py
| Library | OperatingSystem
*** test cases ***
| Example of "tail-like" reading of a file
| | # read the current contents of the file
| | ${original}= | read more | /tmp/junk.txt
| | # do something to add more data to the file
| | Append to file | /tmp/junk.txt | this is new content\n
| | # read the new data
| | ${new}= | Read more | /tmp/junk.txt
| | Should be equal | ${new.strip()} | this is new content