I would like to know how to get the email configuration from D2 using D2FS. I need the information that I set in D2 Config under Tools >> Email.
Thanks.
From what I researched, there is no D2FS service (D2 4.1.0) that allows me to retrieve this information. What I can do instead is use DFC or DFS to read the instances of d2_mail_config.
I have a lot of free-style jobs in my Jenkins instance. I create them with the Jenkins API (I generate an XML file with the configuration and POST it to "http://my-jenkins-instance:8080/createItem?name=JobName").
There is one problem - I cannot generate values for secret fields. For example, I want a config like this:
Inject passwords to the build as environment variables -> Job passwords.
And I need to set 123 in the Password field.
I cannot do this through XML because the value appears encoded in the XML, something like this: {AQAAABAAAANwHq0hsSF6...}
I want to set the value of this parameter.
So my questions are:
Can I get the encoded value of a plain password through some API? So I could send 123 and get {AQAAABAAAANwHq0hsSF6...} back.
If not, can I set the secret value some other way? I can only think of using Selenium, but it is too slow (compared to the API).
I have found a solution.
I can set the value as plain text, <value>123</value>, and create or update the job. Then I need to disable and re-enable the job.
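For illustration, here is a rough Python sketch of that create/update plus disable/enable flow using the requests library against the standard Jenkins endpoints. The host, job name, and credentials are placeholders, and depending on your security settings you may also need to fetch a CSRF crumb first.

import requests

JENKINS = 'http://my-jenkins-instance:8080'
AUTH = ('admin', 'api-token')   # placeholder credentials
JOB = 'JobName'

# config.xml with the secret written as plain text, e.g. <value>123</value>
with open('config.xml', 'rb') as f:
    config_xml = f.read()

headers = {'Content-Type': 'application/xml'}

# Create the job (POST to /job/<name>/config.xml instead to update an existing one)
requests.post(f'{JENKINS}/createItem', params={'name': JOB},
              data=config_xml, headers=headers, auth=AUTH)

# Disable and re-enable the job so Jenkins picks up the plain-text secret
requests.post(f'{JENKINS}/job/{JOB}/disable', auth=AUTH)
requests.post(f'{JENKINS}/job/{JOB}/enable', auth=AUTH)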
I've tried to follow the instructions in this document: LINK
I used SAS authentication and added the "x-ms-rename-source" request header, but I kept getting the error "403-AuthorizationPermissionMismatch". Everything works fine with all the other API methods, but this one seems really tricky. Has anyone had success renaming a file or directory with it?
Instead of using SAS authentication, I used authorization headers. You can check it here.
My request headers:
DateTime now = DateTime.UtcNow;
requestMessage.Headers.Add("x-ms-date", now.ToString("R", CultureInfo.InvariantCulture));
requestMessage.Headers.Add("x-ms-version", "2018-11-09");
// The source path you want to rename
requestMessage.Headers.Add("x-ms-rename-source", renameSourcePath);
// The rename operation only accepts authorization by shared key via this header
requestMessage.Headers.Authorization = AzureStorageAuthenticationHelper.GetAuthorizationHeader(
    StorageGen2AccountName, StorageGen2AccountKey, now, requestMessage);
You can also try renaming the file in Blob Storage by using the Storage Explorer tool.
Kindly let us know if the above helps or if you need further assistance on this issue.
I am using Google BigQuery and I want to integrate it with Google Drive. In BigQuery I am giving the Google spreadsheet URL to upload my data, and it is updating well, but when I write the query in the Google add-on (OWOX BI BigQuery Reports):
Select * from [datasetName.TableName]
I am getting an error:
Query failed: tableUnavailable: No suitable credentials found to access Google Drive. Contact the table owner for assistance.
I just faced the same issue in some code I was writing - it might not directly help you here since it looks like you are not responsible for the code, but it might help someone else, or you can ask the person who does write the code you're using to read this :-)
So I had to do a couple of things:
Enable the Drive API for my Google Cloud Platform project in addition to BigQuery.
Make sure that your BigQuery client is created with both the BigQuery scope AND the Drive scope.
Make sure that the Google Sheets you want BigQuery to access are shared with the "...@appspot.gserviceaccount.com" account that your Google Cloud Platform project identifies itself as.
After that I was able to successfully query the Google Sheets backed tables from BigQuery in my own project.
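As a rough sketch of what requesting both scopes can look like with the google-cloud-bigquery Python client and application default credentials (the dataset and table names below are placeholders):

import google.auth
from google.cloud import bigquery

# Request both the BigQuery and Drive scopes so Sheets-backed tables can be read
credentials, project = google.auth.default(scopes=[
    "https://www.googleapis.com/auth/bigquery",
    "https://www.googleapis.com/auth/drive",
])
client = bigquery.Client(credentials=credentials, project=project)

# Query a table backed by a Google Sheet shared with the service account
for row in client.query("SELECT * FROM `my_dataset.my_sheet_table`").result():
    print(row)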
What was previously said is right:
Make sure that your dataset in BigQuery is also shared with the Service Account you will use to authenticate.
Make sure your Federated Google Sheet is also shared with the service account.
The Drive API should be active as well.
When using the OAuth client, you need to inject both the Drive and BigQuery scopes.
If you are writing Python:
credentials = GoogleCredentials.get_application_default() doesn't work here because you can't inject scopes into it (at least I didn't find a way :D).
Build your request from scratch instead:
from httplib2 import Http
from oauth2client.service_account import ServiceAccountCredentials
from googleapiclient.discovery import build

scopes = (
    'https://www.googleapis.com/auth/drive.readonly',
    'https://www.googleapis.com/auth/cloud-platform')
credentials = ServiceAccountCredentials.from_json_keyfile_name(
    '/client_secret.json', scopes)
http = credentials.authorize(Http())

bigquery_service = build('bigquery', 'v2', http=http)
query_request = bigquery_service.jobs()

query_data = {
    'query': 'SELECT * FROM [test.federated_sheet]'
}

query_response = query_request.query(
    projectId='hello_world_project',
    body=query_data).execute()

print('Query Results:')
for row in query_response['rows']:
    print('\t'.join(field['v'] for field in row['f']))
This likely has the same root cause as:
BigQuery Credential Problems when Accessing Google Sheets Federated Table
Accessing federated tables in Drive requires additional OAuth scopes and your tool may only be requesting the bigquery scope. Try contacting your vendor to update their application?
If you're using pd.read_gbq() as I was, then this would be the best place to get your answer: https://github.com/pydata/pandas-gbq/issues/161#issuecomment-433993166
import pandas_gbq
import pydata_google_auth
import pydata_google_auth.cache

# Instead of get_user_credentials(), you could do default(), but that may not
# be able to get the right scopes if running on GCE or using credentials from
# the gcloud command-line tool.
credentials = pydata_google_auth.get_user_credentials(
    scopes=[
        'https://www.googleapis.com/auth/drive',
        'https://www.googleapis.com/auth/cloud-platform',
    ],
    # Use reauth to get new credentials if you haven't used the drive scope
    # before. You only have to do this once.
    credentials_cache=pydata_google_auth.cache.REAUTH,
    # Set auth_local_webserver to True to have a slightly more convenient
    # authorization flow. Note, this doesn't work if you're running from a
    # notebook on a remote server, such as with Google Colab.
    auth_local_webserver=True,
)

sql = """SELECT state_name
FROM `my_dataset.us_states_from_google_sheets`
WHERE post_abbr LIKE 'W%'
"""

df = pandas_gbq.read_gbq(
    sql,
    project_id='YOUR-PROJECT-ID',
    credentials=credentials,
    dialect='standard',
)

print(df)
I'm searching for a way to let people open Trac tickets by email.
The only solution I've found so far is email2trac (https://oss.trac.surfsara.nl/email2trac/wiki). The problem with this solution is that I don't want to install and set up a mail server; I would like a less invasive solution.
I was thinking about a cron script that downloads messages from a POP3 account and opens/updates tickets by parsing their content.
Is this possible?
I was thinking about a cron script that downloads messages from a POP3 account and opens/updates tickets by parsing their content. Is this possible?
I think it would be possible, yes. Certainly once you had the data from a POP3 account, you could iterate over it and create/update tickets as appropriate with the Trac API.
For the data retrieval step, you could create a new plugin, with a Component which implements the IAdminCommandProvider interface. How you actually retrieve and parse the data is an implementation detail for you to decide, but you could probably use the email/poplib modules and follow some of the parsing structure from email2trac.
For some untested boilerplate to get you started...
from trac.admin import IAdminCommandProvider
from trac.core import Component, implements
from trac.ticket import Ticket


class EmailToTicket(Component):
    implements(IAdminCommandProvider)

    def get_admin_commands(self):
        yield ('emailtoticket retrieve', '',
               'Retrieve emails from a mail server.',
               None, self._do_retrieve_email)

    def _do_retrieve_email(self):
        # TODO - log into the mail server, then parse the data.
        # It would be nice to end up with a tuple of dictionaries,
        # with keys like id, summary, description etc.
        # Then iterate over the data and create/update tickets.
        for email in emails:
            if 'id' in email:  # assuming email is a dictionary
                self._update_ticket(email)
            else:
                self._create_ticket(email)

    def _update_ticket(self, data):
        ticket = Ticket(self.env, data['id'])
        for field, value in data.items():
            ticket[field] = value
        # author, comment and when still need to come from the parsed email
        ticket.save_changes(author, comment, when)

    def _create_ticket(self, data):
        ticket = Ticket(self.env)
        for field, value in data.items():
            ticket[field] = value
        ticket.insert()
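The TODO above is where the POP3 retrieval would go. Here is a rough, untested sketch of that part (Python 3, with a placeholder host and credentials; the idea of carrying an existing ticket id in the subject is purely hypothetical):

import poplib
from email import message_from_bytes

def fetch_emails(host, user, password):
    """Return a list of dictionaries shaped for _create_ticket/_update_ticket."""
    conn = poplib.POP3_SSL(host)
    conn.user(user)
    conn.pass_(password)
    emails = []
    for i in range(1, len(conn.list()[1]) + 1):
        msg = message_from_bytes(b'\r\n'.join(conn.retr(i)[1]))
        data = {'summary': msg['Subject'] or '(no subject)'}
        if not msg.is_multipart():
            payload = msg.get_payload(decode=True)
            data['description'] = payload.decode('utf-8', 'replace') if payload else ''
        # Hypothetical convention: if the subject carries an existing ticket id,
        # put it in data['id'] so _do_retrieve_email updates instead of creating.
        emails.append(data)
    conn.quit()
    return emails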
You could then have cron execute this command via trac-admin (the frequency is up to you - the below example runs every minute):
* * * * * trac-admin /path/to/projenv emailtoticket retrieve
To find out more about plugin development, you should read this Trac wiki page.
I am using JMeter to test a PHP application. I need to create a different thread with a unique session for each user, because in my application you can only have one login per user at a time, so running the same user 100 times would not lead to any conclusions.
I have created 40 users (user0, user1, ..., user39) with the same password. Is there a way to automatically create simultaneous threads for each of them?
Thanks
I just implemented this using JMeter for an app that uses Spring Security (it would be very similar for PHP). This is fairly straightforward; basically:
1) Create a new CSV file using a text editor
Ex: CSVSample_user.csv
username1, password1
username2, password2
2) In jmeter, create a CSV Data Set Config element
Thread Group>add>Config Element>CSV Data Set Config
=> Assign variable names (see image)
3) Create an HTTP Request element
Thread Group>add>Sampler>HTTP Request
=> Create a POST with parameters, and have the variables you created supply the values for those parameters (see bottom image).
NOTE: There are other elements you need, such as a cookie manager, etc. Also, the number of threads needs to be set to the number of login users.
You can use a CSV Data Set Config. This control will allow you to use an external source of variables.
Add -> Config Element -> CSV Data Set Config
You must set the variable names, something like:
Variable Names (comma-delimited): USERNAME,PASSWORD
Then you can use the variables in your HTTP Requests parameters like:
${USERNAME} and ${PASSWORD}
I realize this question is over a year old, but I just came across the same issue and thought I'd add my solution for anyone else who stumbles upon it.
If you have a sequence of usernames and passwords that are simply differentiated by numbers at the end of their values, you can use the __threadNum function to log them in. So for the value of username you might say user${__threadNum}.
This solution is simpler than including a CSV but only works where you have a list such as the one you suggested in your question.
Keep the CSV file and the test plan (i.e. the .jmx file) in the same folder, and recheck the variable names in the CSV Data Set Config and the HTTP Request for any typos.