SoftLayer API to activate storage failover

I'm trying to activate failover of the storage via the SoftLayer API.
I wrote this code:
import SoftLayer
API_USERNAME = 'xxx'
API_KEY = 'yyy'
iscsiId_primary = AAAA
iscsiId_replica = BBBB
client = SoftLayer.Client(username=API_USERNAME, api_key=API_KEY)
networkStorageService = client['SoftLayer_Network_Storage']
networkStorageService.FailoverToReplicant(id=iscsiId_primary)
The console returned this error:
SoftLayerAPIError(SoftLayer_Exception_InvalidValue): Invalid value provided for 'The replicant id provided is not part of the replication
partners associated to this volume.'.
If I try to pass the replica storage ID instead, the error is:
SoftLayerAPIError(SoftLayer_Exception_Public): Replication is not supported by this storage type.
I think the call to the failover function is incorrect. Could someone send me the correct syntax?
Thanks a lot

According to the SoftLayer_Network_Storage::failoverToReplicant method documentation, it is necessary to specify the "replicantId" parameter.
Try the following in your code:
networkStorageService.FailoverToReplicant(iscsiId_replica, id=iscsiId_primary)
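For completeness, here is a fuller sketch of the corrected call. The placeholder ids (1234567 and 7654321) and the error handling are illustrative only; on SLDN the method is documented as failoverToReplicant:
import SoftLayer
API_USERNAME = 'xxx'
API_KEY = 'yyy'
iscsiId_primary = 1234567   # replace with the id of your primary volume
iscsiId_replica = 7654321   # replace with the id of its replication partner
client = SoftLayer.Client(username=API_USERNAME, api_key=API_KEY)
networkStorageService = client['SoftLayer_Network_Storage']
try:
    # The replicant id goes in as the first (positional) argument, while the
    # primary volume id is passed as the init parameter of the call.
    result = networkStorageService.failoverToReplicant(iscsiId_replica,
                                                       id=iscsiId_primary)
    print('Failover started: %s' % result)
except SoftLayer.SoftLayerAPIError as e:
    print('Unable to fail over: %s, %s' % (e.faultCode, e.faultString))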
I hope it helps

Add custom S3 endpoint for Vertica backup

I am trying to back up the Vertica cluster to an S3-like data store (it supports the S3 protocol) internal to my enterprise network. We have similar credentials (ACCESS KEY and SECRET KEY).
Here's what my .ini file looks like:
[S3]
s3_backup_path = s3://vertica_backups
s3_backup_file_system_path = []:/vertica/backups
s3_concurrency_backup = 10
s3_concurrency_restore = 10
[Transmission]
hardLinkLocal = True
[Database]
dbName = production
dbUser = dbadmin
dbPromptForPassword = False
[Misc]
snapshotName = fullbak1
restorePointLimit = 3
objectRestoreMode = createOrReplace
passwordFile = pwdfile
enableFreeSpaceCheck = True
Where can I supply my specific endpoint? For instance, my S3 store is available on a.b.c.d:80. I have tried changing s3_backup_path = a.b.c.d:80://wms_vertica_backups but I get the error Error: Error in VBR config: Invalid s3_backup_path. Also, I have the ACCESS KEY and SECRET KEY in ~/.aws/credentials.
After going through more resources, I have exported the following environment variables: VBR_BACKUP_STORAGE_ENDPOINT_URL, VBR_BACKUP_STORAGE_ACCESS_KEY_ID, VBR_BACKUP_STORAGE_SECRET_ACCESS_KEY. vbr init throws the error Error: Unable to locate credentials Init FAILED., so I'm guessing it is still trying to connect to the AWS S3 servers. (I have now removed the credentials from ~/.aws/credentials.)
I think it's worth adding that I'm running Vertica 8.1.1 in Enterprise mode.
For anyone looking for something similar, the question was answered in the Vertica forum here

How to set a hardware note using the SoftLayer API

How can I use Python to set the note on a Hardware_Server via the API? I've been looking through the available methods in Hardware_Server and don't see a method for setNote.
To add notes or edit other SoftLayer_Hardware_Server properties you need to use the method SoftLayer_Hardware_Server::editObject.
Below you can see an example that adds notes to the bare metal server with id 123456.
"""
Edit a bare metal server's basic information
This example shows how to edit the property 'notes' for a single bare metal server by
using the editObject() method in the SoftLayer_Hardware_Server API service.
See below for more details.
Important manual pages:
http://sldn.softlayer.com/reference/services/SoftLayer_Hardware_Server/editObject
License: http://sldn.softlayer.com/article/License
Author: SoftLayer Technologies, Inc. <sldn@softlayer.com>
"""
import SoftLayer
# Your SoftLayer API username and key.
USERNAME = 'set me'
API_KEY = 'set me'
# The id of the bare metal you wish to edit
hardwareId = 123456
'''
The template object used to edit a SoftLayer_Hardware_Server.
Take account you can edit other properties by using a similar skeleton
'''
templateObject = {
    'notes': 'This is my bare metal server!'
}
# Declare a new API service object
client = SoftLayer.create_client_from_env(username=USERNAME, api_key=API_KEY)
try:
    # Add notes to the Bare Metal server
    result = client['SoftLayer_Hardware_Server'].editObject(templateObject,
                                                            id=hardwareId)
    print('Bare Metal Server edited')
except SoftLayer.SoftLayerAPIError as e:
    print("Unable to edit the server: %s, %s" % (e.faultCode, e.faultString))
References:
http://sldn.softlayer.com/reference/services/SoftLayer_Hardware_Server/editObject
http://sldn.softlayer.com/reference/datatypes/SoftLayer_Hardware_Server
I hope this helps you.
Regards,

PySpark: how to stream data from a given API URL

I was given an API URL and a method getUserPosts(), which returns the data needed for my data processing function. I am able to get the data by using Client from suds.client as follows:
from suds.client import Client
from suds.xsd.doctor import ImportDoctor, Import
url = 'url'
imp = Import('http://schemas.xmlsoap.org/soap/encoding/')
imp.filter.add('filter')
d = ImportDoctor(imp)
client = Client(url, doctor=d)
tempResult = client.service.getUserPosts(user_ids = '',date_from='2016-07-01 03:19:57', date_to='2016-08-01 03:19:57', limit=100, offset=0)
Now, each tempResult will contain 100 records. I want to stream the data from the given API URL into an RDD for parallelized processing. However, after reading the pyspark.streaming documentation, I can't find a streaming method for a customized data source. Could anyone give me an idea of how to do so?
Thank you.
After digging for a while, I found out how to solve the problem: I used Kafka streaming. Basically, you create a producer that reads from the given API and publishes to a specific topic and port, and then a consumer that listens on that topic and port and streams the data into Spark.
Note that the producer and consumer must run as separate threads in order to achieve real-time streaming.
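For anyone who wants a concrete starting point, here is a minimal sketch of that producer/consumer split. It assumes a Kafka broker on localhost:9092, the kafka-python package, and the Spark 1.x/2.x pyspark.streaming.kafka API; the topic name user_posts and the str() serialization of each record are placeholders to adapt to your schema, and client is the suds Client built as in the question.
import threading
from kafka import KafkaProducer                 # pip install kafka-python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils  # Spark 1.x / 2.x API
BROKER = 'localhost:9092'  # assumed Kafka broker address
TOPIC = 'user_posts'       # assumed topic name
def produce_posts(suds_client):
    """Producer thread: page through the SOAP API and push each record to Kafka."""
    producer = KafkaProducer(bootstrap_servers=BROKER)
    offset = 0
    while True:
        posts = suds_client.service.getUserPosts(
            user_ids='', date_from='2016-07-01 03:19:57',
            date_to='2016-08-01 03:19:57', limit=100, offset=offset)
        if not posts:
            break
        for post in posts:
            # str() is only a placeholder serialization for the suds object.
            producer.send(TOPIC, str(post).encode('utf-8'))
        producer.flush()
        offset += 100
# Run the producer in its own thread so it does not block the Spark consumer;
# `client` is the suds Client created in the question above.
producer_thread = threading.Thread(target=produce_posts, args=(client,))
producer_thread.daemon = True
producer_thread.start()
# Consumer side: a direct Kafka stream feeding ordinary DStream transformations.
sc = SparkContext(appName='user-posts-streaming')
ssc = StreamingContext(sc, 10)  # 10 second micro-batches
stream = KafkaUtils.createDirectStream(
    ssc, [TOPIC], {'metadata.broker.list': BROKER})
stream.map(lambda kv: kv[1]).pprint()  # kv = (key, value)
ssc.start()
ssc.awaitTermination()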

Credentials Error when integrating Google Drive with BigQuery

I am using Google BigQuery and I want to integrate it with Google Drive. In BigQuery I give the Google Spreadsheet URL to upload my data, and it is updating well, but when I write the query in the Google Add-on (OWOX BI BigQuery Reports):
Select * from [datasetName.TableName]
I am getting an error:
Query failed: tableUnavailable: No suitable credentials found to access Google Drive. Contact the table owner for assistance.
I just faced the same issue in some code I was writing - it might not directly help you here, since it looks like you are not responsible for the code, but it might help someone else, or you can ask the person who does write the code you're using to read this :-)
So I had to do a couple of things:
Enable the Drive API for my Google Cloud Platform project in addition to BigQuery.
Make sure that your BigQuery client is created with both the BigQuery scope AND the Drive scope (see the sketch below).
Make sure that the Google Sheets you want BigQuery to access are shared with the "...@appspot.gserviceaccount.com" account that your Google Cloud Platform project identifies itself as.
After that I was able to successfully query the Google Sheets backed tables from BigQuery in my own project.
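For reference, here is a minimal sketch of point 2 above using the current google-cloud-bigquery and google-auth packages; the key file path key.json, the project id and the table name are placeholders, so adapt them to your setup:
from google.cloud import bigquery
from google.oauth2 import service_account
# Request both the BigQuery and the Drive scope so that Sheets-backed
# (federated) tables can be read.
scopes = [
    'https://www.googleapis.com/auth/bigquery',
    'https://www.googleapis.com/auth/drive',
]
credentials = service_account.Credentials.from_service_account_file(
    'key.json', scopes=scopes)
client = bigquery.Client(project='your-project-id', credentials=credentials)
# With both scopes in place, querying a Sheets-backed table should succeed.
query = 'SELECT * FROM `your_dataset.sheet_backed_table`'
for row in client.query(query).result():
    print(row)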
What was previously said is right:
Make sure that your dataset in BigQuery is also shared with the Service Account you will use to authenticate.
Make sure your Federated Google Sheet is also shared with the service account.
The Drive API should be active as well.
When using the OAuthClient you need to inject both scopes, for Drive and for BigQuery.
If you are writing Python:
credentials = GoogleCredentials.get_application_default() does not let you inject scopes (at least I didn't find a way :D), so build your request from scratch:
from httplib2 import Http
from googleapiclient.discovery import build
from oauth2client.service_account import ServiceAccountCredentials
scopes = (
    'https://www.googleapis.com/auth/drive.readonly',
    'https://www.googleapis.com/auth/cloud-platform')
credentials = ServiceAccountCredentials.from_json_keyfile_name(
    '/client_secret.json', scopes)
http = credentials.authorize(Http())
bigquery_service = build('bigquery', 'v2', http=http)
query_request = bigquery_service.jobs()
query_data = {
    'query': (
        'SELECT * FROM [test.federated_sheet]')
}
query_response = query_request.query(
    projectId='hello_world_project',
    body=query_data).execute()
print('Query Results:')
for row in query_response['rows']:
    print('\t'.join(field['v'] for field in row['f']))
This likely has the same root cause as:
BigQuery Credential Problems when Accessing Google Sheets Federated Table
Accessing federated tables in Drive requires additional OAuth scopes and your tool may only be requesting the bigquery scope. Try contacting your vendor to update their application?
If you're using pd.read_gbq() as I was, then this would be the best place to get your answer: https://github.com/pydata/pandas-gbq/issues/161#issuecomment-433993166
import pandas_gbq
import pydata_google_auth
import pydata_google_auth.cache
# Instead of get_user_credentials(), you could do default(), but that may not
# be able to get the right scopes if running on GCE or using credentials from
# the gcloud command-line tool.
credentials = pydata_google_auth.get_user_credentials(
    scopes=[
        'https://www.googleapis.com/auth/drive',
        'https://www.googleapis.com/auth/cloud-platform',
    ],
    # Use reauth to get new credentials if you haven't used the drive scope
    # before. You only have to do this once.
    credentials_cache=pydata_google_auth.cache.REAUTH,
    # Set auth_local_webserver to True to have a slightly more convenient
    # authorization flow. Note, this doesn't work if you're running from a
    # notebook on a remote server, such as with Google Colab.
    auth_local_webserver=True,
)
sql = """SELECT state_name
FROM `my_dataset.us_states_from_google_sheets`
WHERE post_abbr LIKE 'W%'
"""
df = pandas_gbq.read_gbq(
    sql,
    project_id='YOUR-PROJECT-ID',
    credentials=credentials,
    dialect='standard',
)
print(df)

How do I test if provider credentials are valid in apache libcloud?

I was trying to create a driver for OpenStack using Apache Libcloud. It doesn't raise any error even if the user credentials are wrong. So when I checked the FAQ, I found an answer, as given in the link:
Apache libcloud FAQ
But that doesn't seem efficient, since issuing a query each time just to check whether the user is authenticated will hurt performance if the query returns a lot of data.
When I checked the response I got from the API, there is a field called driver.connection.auth_user_info, and I found that the field is empty if the user is not authenticated. So can I use this approach as a standard? Any help is appreciated.
An OpenStack driver for libcloud is already available:
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver
os = get_driver(Provider.OPENSTACK)
params = {'key': 'username', 'ex_force_service_region': 'regionOne',
          'ex_force_service_name': 'nova', 'ex_force_auth_version': '2.0_password',
          'ex_force_auth_url': 'http://127.0.0.1:5000',
          'ex_force_service_type': 'compute', 'secret': 'password',
          'ex_tenant_name': 'tenant'}
driver = os(**params)
But libcloud does not check the credentials just by creating the driver object. Instead, the credentials are validated only when a request is sent. If the internal exception InvalidCredsError is thrown, the credentials are invalid, and your own flag variable can be set:
from libcloud.common.types import InvalidCredsError
validcreds = False
try:
    nodes = driver.list_nodes()
    if len(nodes) >= 0:
        validcreds = True
except InvalidCredsError:
    print("Invalid credentials")
except Exception as e:
    print(str(e))
I would not rely on the internal variable auth_user_info because it could change over time.
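If you want to wrap that check up for reuse, a small helper along these lines could work; it is just a sketch that relies on list_nodes() (or any other authenticated request) raising InvalidCredsError when the credentials are bad:
from libcloud.common.types import InvalidCredsError
def credentials_are_valid(driver):
    """Return True if an authenticated request succeeds, False on bad credentials."""
    try:
        # Any call that forces authentication will do; list_nodes() is used
        # here because every compute driver provides it.
        driver.list_nodes()
        return True
    except InvalidCredsError:
        return False
# Usage with the OpenStack driver created above:
if credentials_are_valid(driver):
    print("Credentials OK")
else:
    print("Invalid credentials")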