Get the JDBC Providers for the Cell using wsadmin - jython

I am trying to list the JDBC providers at cell scope, but the command also lists the providers at node and server scope. How do I get rid of the node- and server-scoped providers in the list?
AdminConfig.list('JDBCProvider', AdminConfig.getid( '/Cell:CellV70A/'))
output:
'"DB2 Universal JDBC Driver Provider(cells/CellV70A/nodes/nodename|resources.xml#JDBCProvider_1302300228086)"\n"DB2 Universal JDBC Driver Provider(cells/CellV70A|resources.xml#JDBCProvider_1263590015775)"\n"WebSphere embedded ConnectJDBC driver for MS SQL Server(cells/CellV70A|resources.xml#JDBCProvider_1272027151294)"'

If you look at the help for the AdminConfig.list command:
wsadmin>print AdminConfig.help('list')
WASX7056I: Method: list
...
Method: list
Arguments: type, scope
Description: Lists all the configuration objects of the type named
by "type" within the scope of the configuration object named by "scope."
...
It says "within the scope". Since node and server-scoped JDBCProviders are within the scope of the cell, they are returned by your command. If you list all JDBCProviders at cell scope using the Admin Console and then look at the Command Assistance, you'll see something like:
Note that scripting list commands may generate more information than is displayed by the administrative console because the console generally filters with respect to scope, templates, and built-in entries. AdminConfig.list('JDBCProvider', AdminConfig.getid('/Cell:MyCell/'))
So you'll need to filter the returned list similarly. You could throw together a very simple script to do so:
jdbcProviders = AdminConfig.list('JDBCProvider', AdminConfig.getid('/Cell:MyCell/')).splitlines()
for jdbcProvider in jdbcProviders:
    # Skip entries scoped to a node or a server; cell-scoped entries
    # contain neither "/nodes/" nor "/servers/" in their containment path.
    # find() is used because older wsadmin Jython levels do not support
    # substring tests with "in".
    if jdbcProvider.find('/nodes/') != -1 or jdbcProvider.find('/servers/') != -1:
        continue
    print jdbcProvider
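If you need the same filter in more than one script, it can be wrapped in a small helper. This is just a sketch; listCellScoped is my name, not a wsadmin built-in:

def listCellScoped(objectType, cellName):
    # Hypothetical helper: list only objects of objectType configured
    # directly at the given cell's scope.
    cellId = AdminConfig.getid('/Cell:%s/' % cellName)
    entries = AdminConfig.list(objectType, cellId).splitlines()
    return [e for e in entries
            if e.find('/nodes/') == -1 and e.find('/servers/') == -1]

for provider in listCellScoped('JDBCProvider', 'CellV70A'):
    print provider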

ANSYS Mechanical Workbench Scripting - Accessing Parameters

Ansys gurus,
My project is a static structural analysis using ANSYS Workbench Mechanical. I have created the parametrized geometry (via DesignModeler) and material properties in Workbench, and used ACT scripting to configure the model. However, I can't find much information on how to access the parameters via ACT scripting.
I have confirmed that the geometric parameters are successfully created in the workbench, e.g.
ID   Parameter Name   Value   Unit
P1   diameter         50      um
The documentation LINK suggests that I can obtain a parameter by ID using Analysis.GetParameter(); however, the following code didn't work for me and resulted in the error below.
Code:
STATIC_STRUCTURAL = ExtAPI.DataModel.AnalysisByName("Static Structural")
HEIGHT = STATIC_STRUCTURAL.GetParameter('height')
Error:
Property not found.
Do you have any suggestions on the cause of this error? Is it because the parameters were not imported from the Workbench "project schematic" into "Model", or is the code I used to retrieve the parameters incorrect? In either case, could you advise on the correct method to access the parameters? Thank you!
hawkoli1987
If you want to access a parameter from the "project schematic" page, you can create a list. If you then want to do something with it inside Mechanical, you have to send the commands to your model:
# Access the geometric parameters
allParameters = Parameters.GetAllParameters()
for parameter in allParameters:
    print parameter.DisplayText
    if parameter.DisplayText == 'height':
        heightParameter = parameter

# Loop over all systems in the project
for system in GetAllSystems():
    # Get the Model container
    model = system.GetContainer(ComponentName="Model")
    # Edit the model component in batch mode
    system.Refresh()
    model.Edit(Interactive=True)
    # Code to be sent to ANSYS Mechanical
    cmd = '''
here goes your ACT script as a string. You have to make sure that there are no leading spaces or tabs.
'''
    # Send the code and exit Mechanical
    model.SendCommand(Language='Python', Command=cmd)
    model.Exit()

print "Finished script execution."

How to run multiple gremlin commands as a single transaction?

In Amazon Neptune I would like to run multiple Gremlin commands in Java as a single transaction. The documentation says that tx.commit() and tx.rollback() are not supported. It suggests this: "Multiple statements separated by a semicolon (;) or a newline character (\n) are included in a single transaction."
The example from the documentation shows that Gremlin is supported in Java, but I don't understand how to submit "multiple statements separated by a semicolon":
GraphTraversalSource g = traversal().withRemote(DriverRemoteConnection.using(cluster));
// Add a vertex.
// Note that a Gremlin terminal step, e.g. next(), is required to make a request to the remote server.
// The full list of Gremlin terminal steps is at https://tinkerpop.apache.org/docs/current/reference/#terminal-steps
g.addV("Person").property("Name", "Justin").next();
// Add a vertex with a user-supplied ID.
g.addV("Custom Label").property(T.id, "CustomId1").property("name", "Custom id vertex 1").next();
g.addV("Custom Label").property(T.id, "CustomId2").property("name", "Custom id vertex 2").next();
g.addE("Edge Label").from(g.V("CustomId1")).to(g.V("CustomId2")).next();
The doc you are referring to covers the "string" mode of query submission. Your approach uses the "bytecode" mode, via the remote instance of the graph traversal source (the "g" object). Instead, you should submit a string script via the client object:
Client client = gremlinCluster.connect();
client.submit("g.V()...iterate(); g.V()...iterate(); g.V()...");
Gremlin sessions
Java Example
After getting the cluster object,
String sessionId = UUID.randomUUID().toString();
Client client = cluster.connect(sessionId);
client.submit(query1);
client.submit(query2);
// ...
client.submit(query3);
client.close();
When you run .close() all the mutations get committed.
You can also capture the response from each query:
List<Result> results = client.submit(query);
results.stream()...
You can also use the SessionedClient, which will run all queries in the same transaction upon close().
More information is here: https://docs.aws.amazon.com/neptune/latest/userguide/access-graph-gremlin-sessions.html#access-graph-gremlin-sessions-glv

Salt: Pass parameters to custom module executed inside a pillar

I am coding a custom module that is executed inside a pillar (to set a pillar variable) but I need it to retrieve an external parameter.
The idea is to retrieve a parameter from the master server. For example, if I execute
salt 'myminion' state.highstate
the custom module will be called and it should retrieve a parameter to generate the pillar.
I was looking into options like:
Using environment variables: it doesn't work, as the execution modules don't seem to have access to the shell environment of the salt command.
Using command-line parameters: I don't know if this is even possible, as I couldn't find any documentation.
Using an additional pillar on the command line: it doesn't work, as the execution module runs during pillar evaluation, so it does not have access to __pillar__ or __salt__['pillar.get'] (both are empty).
Reading from stdin: it does not work from a custom module.
Using a file to read the info: I didn't even try this, because it is not an option for me for security reasons. I don't want the information stored.
Any ideas on whether or how this can be done?
Thanks a lot!
By "a custom module that is executed inside a pillar (to set a pillar variable)", do you mean an external pillar?
If so, passing it parameters is covered in that document:
You can pass a single argument, a list of arguments or a dictionary of arguments to your pillar:
ext_pillar:
  - example_a: some argument
  - example_b:
    - argumentA
    - argumentB
  - example_c:
      keyA: valueA
      keyB: valueB
External pillars merge their data into the pillar dictionary, and are "custom modules", so I think that would fit your case.
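For reference, whatever you configure under the pillar's name arrives as arguments to the module's ext_pillar function. A minimal sketch (the file name and the returned keys are illustrative):

# example_b.py -- a minimal external pillar.
# With the "- example_b:" config above, Salt calls
# ext_pillar(minion_id, pillar, 'argumentA', 'argumentB').
def ext_pillar(minion_id, pillar, *args):
    # minion_id is always passed first, so per-minion logic needs no
    # extra wiring.
    return {'example_b': {'minion': minion_id, 'args': list(args)}}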
If that's not what you're trying to do, can you update the question? Where is this parameter coming from? Is it different depending on the minion (minion_id is always passed to an external pillar)?
(edit) Adding a couple of links about safely storing secrets:
using vault
dotgpg
blackbox

How to use insert_job

I want to run a BigQuery SQL query using the insert method.
I ran the following code:
JobConfigurationQuery = Google::Apis::BigqueryV2::JobConfigurationQuery
bq = Google::Apis::BigqueryV2::BigqueryService.new
scopes = [Google::Apis::BigqueryV2::AUTH_BIGQUERY]
bq.authorization = Google::Auth.get_application_default(scopes)
bq.authorization.fetch_access_token!
query_config = {query: "select colA from [dataset.table]"}
qr = JobConfigurationQuery.new(configuration:{query: query_config})
bq.insert_job(projectId, qr)
and I got an error as below:
Caught error invalid: Job configuration must contain exactly one job-specific configuration object (e.g., query, load, extract, spreadsheetExtract), but there were 0:
Please let me know how to use the insert_job method.
I'm not sure which client library you're using, but insert_job probably takes a JobConfiguration. You should create one of those and set its query field to the JobConfigurationQuery you've created.
This is necessary because you can insert various jobs (load, copy, extract) with different types of configurations through this one API method, and they all take a single configuration type with a subfield that specifies the job type and the details of the job to insert.
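The question uses the Ruby client, but the required nesting is the same in every client. Here is the job resource shape, illustrated with the Python API client (the project ID is a placeholder):

from googleapiclient.discovery import build

bq = build('bigquery', 'v2')

# The job resource nests the job-specific config under "configuration";
# submitting a bare query config is why the API reports 0 job-specific
# configuration objects.
body = {
    'configuration': {
        'query': {
            'query': 'select colA from [dataset.table]',
        }
    }
}
job = bq.jobs().insert(projectId='your-project-id', body=body).execute()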
More info from BigQuery's documentation:
jobs.insert documentation
job resource: note the "configuration" field and its "query" subfield

MsTest, DataSourceAttribute - how to get it working with a runtime generated file?

For some tests I need to run a data-driven test with a configuration that is generated (via reflection) in the ClassInitialize method. I have tried everything, but I just cannot get the data source properly set up.
The test takes a list of classes in a CSV file (one line per class) and then checks that the mappings to the database work (i.e., it tries to get one item from the database for every entity, which throws an exception when the table structure does not match).
The test method is:
[DataSource(
    "Microsoft.VisualStudio.TestTools.DataSource.CSV",
    "|DataDirectory|\\EntityMappingsTests.Types.csv",
    "EntityMappingsTests.Types#csv",
    DataAccessMethod.Sequential)
]
[TestMethod()]
public void TestMappings () {
Obviously the file is EntityMappingsTests.Types.csv. It should be in the DataDirectory.
Now, in the Initialize method (marked with ClassInitialize) I put that together and then try to write it.
WHERE should I write it to? WHERE IS THE DataDirectory?
I tried:
File.WriteAllText(context.TestDeploymentDir + "\\EntityMappingsTests.Types.csv", types.ToString());
File.WriteAllText("EntityMappingsTests.Types.csv", types.ToString());
Both result in "the unit test adapter failed to connect to the data source or read the data". More exactly:
Error details: The Microsoft Jet database engine could not find the
object 'EntityMappingsTests.Types.csv'. Make sure the object exists
and that you spell its name and the path name correctly.
So where should I put that file?
I also tried just writing it to the current directory and taking out the DataDirectory part - same result. Sadly, there is limited debugging support here.
Please use the Process Monitor tool from technet.microsoft.com/en-us/sysinternals/bb896645. Put a filter on MSTest.exe or the associated qtagent32.exe and find out what locations it is trying to load from, and at what point in the test loading process. Then please provide an update with those details here.
After you add the CSV file to your VS project, you need to open its properties. Set the property "Copy to Output Directory" to "Copy Always". DataDirectory defaults to the location of the compiled executable, which runs from the output directory, so the file will be found there.