How do I set Data Source password from environment variable in DataGrip?

To connect to the DB I have to make an API call to generate a token. Let's say I store this in the environment variable $TOKEN.
Now, while setting up my data source in DataGrip, how can I tell DataGrip to read the $TOKEN environment variable, given that its value will keep changing? Before opening DataGrip I will make the API call to generate the token and set it in an environment variable via a script.
Is it possible to read an environment variable as a password in DataGrip?

There is no such feature out of the box.
You can create a custom plugin to provide this kind of authorization. It is a matter of implementing one class: com.intellij.database.dataSource.DatabaseAuthProvider.
See this plugin as an example.

Related

How to keep a properties file outside the Mule code in MuleSoft

I have defined a dev.properties file for the Mule flow, where I am passing the username and password required to run the flow. This password gets updated every month, so every month I have to redeploy the code to the server after changing the password. Is there a way to keep the properties file outside the code, on the Mule server path, and change it when required in order to avoid redeployment?
One more idea is to completely discard any usage of a file to pick up the username and password.
Instead, try using a credentials-providing service: an HTTP Requester that collects the username and password from an independent API (a child API/providing service).
Store them in a cache/Object Store of your parent API (the calling API). Keep using those values unless the flow using them fails or the client needs to expire them after a month; then simply refresh them.
You can trigger the credentials-providing service using a Scheduler with a cron expression for monthly triggers.
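This is not Mule configuration, but a minimal Node.js sketch of the same fetch-and-cache pattern for illustration; the credentials endpoint URL and response fields are hypothetical:

// Generic sketch of the pattern above: fetch credentials from an independent
// providing service, cache them, and refresh them on expiry or failure.
// Assumes Node 18+ for the global fetch().
const ONE_MONTH_MS = 30 * 24 * 60 * 60 * 1000;
let cached = null; // { username, password, fetchedAt }

async function getCredentials() {
  const expired = !cached || Date.now() - cached.fetchedAt > ONE_MONTH_MS;
  if (expired) {
    // Hypothetical child API returning { "username": "...", "password": "..." }
    const res = await fetch('https://credentials-service.example.com/creds');
    const { username, password } = await res.json();
    cached = { username, password, fetchedAt: Date.now() };
  }
  return cached;
}

// If the flow using the credentials fails, clear the cache so the next call refreshes them.
function invalidateCredentials() {
  cached = null;
}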
No, because even if the properties file is outside the application, properties are loaded on application deployment. So you would need to restart the application anyway to pick up the new values.
Instead, you can create a custom module that reads the properties from somewhere (a file, some service, etc.), assigns the value to a variable, and then use the variable at execution time. Note that some configurations can only be set at deployment time, so variables will not be evaluated for those.
If the credentials do not expose your application security or data, then you can move them to another config file (placed outside the Mule app path). Generate a RAML file which will read and reload the credentials after application deployment/start-up, and store them in a cache with a timeToLive of around 12 hours.
The next time you have to change the username/password, change them directly in the file and the cache will refresh automatically after the expiry time.
Actually no, because all the secure properties need to be there at runtime, and if they are not there your application will fail.
There is one way, but it's not the best one: instead of editing the code, you can edit the secure properties (i.e. the username and password in your case) directly in the CloudHub Runtime Manager properties tab.
After editing, just apply the changes; the API will restart automatically and deploy successfully.

PostgREST error on connecting in AWS using secrets

Currently deploying PostgREST in AWS. When I use Fargate and just hardcode the connection string in the environment variables, the machine works like a charm.
However, I recently replaced these values with secrets. In the secret I copy-pasted the entire string as the value, and in the environment variable I changed the source from "Value" to "ValueFrom".
So the value now is:
postgres://<myuser>:<mypass>#<amazon-rds-instance>:5432/<db>
When I use this connection string directly in the environment variable I can easily connect, so I know the information is correct.
The logs come back with the following error:
{"details":"missing \"=\" after \"{\"postgrest_db_connection\":\"postgres://myuser:mypass#amazon-rds-instance:5432/db\"}\" in connection info string\n","code":"","message":"Database connection error"}
I also checked that I have no characters in the string that need to be escaped. What could I be missing here?
So I figured it out. Unfortunately, this line was the cause:
It is only supported to inject the full contents of a secret as an environment variable. Specifying a specific JSON key or version is not supported at this time.
This means that whenever you use a secret via the ValueFrom setting in the environment variables (when working with Fargate), the entire secret value gets injected verbatim.
I tested this using a secret for the PostgREST schema variable. I got back the value:
{'PGRST_SCHEMA_URL': 'public'}
Whilst I was expecting it to be just:
public
This is why my configuration went wrong as well. Thanks everyone for looking into it.
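In other words, as far as I can tell, if you keep using ValueFrom with Fargate you should store the bare value as the secret's contents, e.g.:
postgres://myuser:mypass#amazon-rds-instance:5432/db
rather than a JSON object wrapping it, e.g.:
{"postgrest_db_connection": "postgres://myuser:mypass#amazon-rds-instance:5432/db"}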

Is it possible to set the environment variable "GOOGLE_APPLICATION_CREDENTIALS" to an uploaded JWT File in Flowground?

I am trying to use the "google-api-nodejs-client" (https://github.com/googleapis/google-api-nodejs-client) with a JSON Web Token in a flowground connector implementation. Is there a way to have the environment variable "GOOGLE_APPLICATION_CREDENTIALS" point to a configurable JWT file that the user can upload into a flow?
Example of client usage from the library page:
// This method looks for the GCLOUD_PROJECT and GOOGLE_APPLICATION_CREDENTIALS
// environment variables.
const { google } = require('googleapis');
const auth = new google.auth.GoogleAuth({
  scopes: ['https://www.googleapis.com/auth/cloud-platform']
});
Let's see if I understand correctly what you want to do:
- create a flow that can be triggered from outside and accesses a Google API via the google-api-nodejs-client module,
- every time you trigger the flow you will post a valid JWT for accessing the Google API,
- you want to store the JWT in the local file-system; the mentioned environment variable contains the path to the persisted JWT.
Generally speaking, this is a valid approach for the moment.
You can create a file in the local file-system:
fs.writeFile(process.env.HOME + '/jwt.token', ...)
Sebastian already explained how to define the needed environment variables.
Please keep in mind that writing and reading the JWT file must take place in the same step of the flow execution. The file is not persisted after that step finishes executing.
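A minimal sketch of that approach, assuming (hypothetically) that the uploaded service-account JSON arrives on the incoming message as msg.body.credentials; the function name is only illustrative, not flowground's actual step signature:

// Persist the uploaded key material and point GOOGLE_APPLICATION_CREDENTIALS at it,
// all within the same step execution (see the note above about persistence).
const fs = require('fs');
const path = require('path');
const { google } = require('googleapis');

async function runStep(msg) {
  const keyFile = path.join(process.env.HOME, 'jwt.json');
  fs.writeFileSync(keyFile, JSON.stringify(msg.body.credentials)); // hypothetical field
  process.env.GOOGLE_APPLICATION_CREDENTIALS = keyFile;

  const auth = new google.auth.GoogleAuth({
    scopes: ['https://www.googleapis.com/auth/cloud-platform']
  });
  const client = await auth.getClient();
  // Use `client` (or the google.* APIs) here, before this step finishes.
  return client;
}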
Why is this a valid approach for the moment only?
I assume that we will prevent writing to the local file-system in the near future. That would rule out the described solution as well.
From my point of view, the better solution would be to use the OAuth2 mechanism built into flowground.
For more information regarding this approach, see:
https://github.com/googleapis/google-api-nodejs-client#oauth2-client
https://doc.flowground.net/getting-started/credential.html
You can set environment variables in flowground on the "ENV vars" page for your connector.

How to use properties from external file in automatically run SOAP UI tests?

I want to keep my Web Service username and password separately from SOAP UI Test XML file.
So, I save the username and password as custom properties in an external file called properties.xml.
But the problem is that after I manually import the property values (defined at the project level) and save the test, the values are written into the test XML file in plain text.
So anybody who opens the test XML file after me will be able to read the username and password in my property values, which I do not want.
Inside the test XML file it looks like this:
<con:name>USERNAME</con:name><con:value>!MYUSERNAMEVALUE</con:value></con:property><con:property><con:name>PASSWORD</con:name><con:value>!MYPASSWORD</con:value>
Can I reference my username and password from the external file properties.xml while automatically running the test, without showing the values in the test XML?
I thought this configuration would work:
<con:name>USERN</con:name><con:value>${projectDir\properties#USERNAME}</con:value></con:property><con:property><con:name>PASSWORD</con:name><con:value>${projectDir\properties#PASSWORD}</con:value>
or this one:
<con:name>USERN</con:name><con:value>${projectDir\properties.xml#USERNAME}</con:value></con:property><con:property><con:name>PASSWORD</con:name><con:value>${projectDir\properties.xml#PASSWORD}</con:value>
But they are not resolving the property values correctly.
I don't think you can use external files that way.
Either you add a Groovy step that extracts the username and password from your file, and then make your web service point to those recovered values;
or, when using the testRunner (I guess you do so for automatically running the tests), you use the -P option, which sets your values as project custom properties. In that case, in your web service, you just have to point to those project properties.
Example:
In your web service, set your username as ${#Project#username} and your password as ${#Project#password}, and when you launch the testRunner add the following options:
-Pusername=myUserName and -Ppassword=myPassword
Refer to the testRunner command-line arguments documentation.
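For example, a full invocation could look like this (the project file name is hypothetical; use testrunner.sh instead of testrunner.bat on Linux/macOS):
testrunner.bat -Pusername=myUserName -Ppassword=myPassword MyProject-soapui-project.xml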

Configuration for PowerShell module created via .NET framework

What's the best practice when you have dependencies that you want to be able to configure when creating a PowerShell module in C#?
My specific scenario is that the PowerShell module I am creating via C# code will use a WCF service. Hence, the service's URL must be something that the clients can configure.
Is there a standard approach for this, or will it have to be custom-implemented?
A somewhat standard way to do this is to allow the value to be provided as a parameter, or to default to reading a special variable via PSCmdlet's GetVariableValue. This is what the built-in Send-MailMessage cmdlet does: it reads the variable PSEmailServer if no server is provided.
I might not be understanding your question. So I'll posit a few scenarios:
1. Your PS module will always use the same WCF endpoint. In that case you could hardcode the URL in the module.
2. You have a limited number of endpoints to choose from, and there's some algorithm or best practice to associate an endpoint with a particular user, such as the closest one geographically, or one based on the dept or division the user is in, etc.
3. It's completely up to the end user's preference to choose a URL.
For case #2, I suggest you implement the algorithm/best practice and save the result someplace - as part of the module install.
For case #3, using an environment variable seems reasonable, or a registry setting, or a file in one of the user's profile directories. Probably more important than where you persist the data though, is the interface you give users to change the setting. For example if you used an environment variable, it would be less friendly to tell the user to go to Control Panel, System, Advanced, Environment, User variable, New..., than to provide a simple PS function to change the URL. In fact I'd say providing a cmdlet/function to perform configuration is the closest to a "standard" I can think of.