I've installed Spinnaker on AWS using the quick start guide (https://s3.amazonaws.com/quickstart-reference/spinnaker/latest/doc/spinnaker-on-the-aws-cloud.pdf). I'm now going through the bake/deploy (http://www.spinnaker.io/v1.0/docs/bake-and-deploy-pipeline) guide.
I'm trying to create a security group for my application, but there is nothing to select in the Account field. Also, the VPC field only shows None (EC2 Classic). What am I missing?
I tried this answer, but no luck: "Unable to create an Application - no accounts listed in dashed rectangle beside the Accounts heading".
Spinnaker follows some very specific naming conventions for VPCs. If these are existing VPCs/subnets and you're not seeing anything in the VPC select field (presumably not when creating load balancers or server groups either), you can add a tag to the VPC and its subnets via the AWS console with the key immutable_metadata and the value {"purpose": "{subnet purpose}"}.
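If you'd rather script the tagging, here is a minimal sketch using boto3 (the resource IDs and the "external" purpose value are placeholders; substitute your own):

    import json
    import boto3

    ec2 = boto3.client("ec2")

    # Tag the VPC and its subnets with immutable_metadata so Spinnaker picks them up.
    purpose_tag = {"Key": "immutable_metadata",
                   "Value": json.dumps({"purpose": "external"})}
    ec2.create_tags(
        Resources=["vpc-0123456789abcdef0",      # your VPC id
                   "subnet-0123456789abcdef0"],  # and each of its subnets
        Tags=[purpose_tag],
    )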
The CloudFormation template from the quickstart names the VPC 'SpinnakerVPC' and the subnets 'SpinnakerVPC.internal.{az}'. When I renamed the VPC to 'defaultvpc', updated the subnet names to 'defaultvpc.internal.{az}', and restarted, I could see my VPCs.
The VPC needs to have a tag in the following format: "{vpcName}.{subnetPurpose (e.g. "internal")}.{availabilityZone}". I am using my existing VPC, and there I had to add the tag immutable_metadata = {"purpose": "{TestDeploy}"} to the VPC.
Also, make sure the subnet names are in the following format: "{VPC name}.{external or internal}.{az}".
Then make sure to restart Spinnaker and refresh the cache.
See the following link for more detailed information:
http://www.spinnaker.io/docs/troubleshooting-guide
I was given a key, which happens to be a .json file, to access BigQuery data, but I have no idea where to put it or how I should use it. I tried going to the BigQuery console, but I can't seem to find where to supply the key to view the data. I have no experience using BigQuery, so I tried to search for tutorials, to no avail.
I assume that you have created a service account key with assigned roles (e.g. roles/bigquery.admin) and downloaded a JSON file that contains your key.
You will only need it when you use the BigQuery API through client libraries, such as Python or Java. As you can see in the documentation, you need to set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the path of the JSON file that contains your service account key to be able to access BigQuery resources.
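For example, here is a minimal sketch with the Python client library (assuming google-cloud-bigquery is installed and /path/to/key.json is your downloaded key; the public dataset below is just for testing access):

    import os
    from google.cloud import bigquery

    # Point the client libraries at the service account key file.
    os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/key.json"

    client = bigquery.Client()  # credentials are read from the variable above

    # Run a small query against a public dataset to verify access.
    query = ("SELECT name FROM `bigquery-public-data.usa_names.usa_1910_2013` "
             "LIMIT 5")
    for row in client.query(query).result():
        print(row.name)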
When using the web UI in the Google Cloud Console, you don't need the JSON key file. You only need to take care of assigning appropriate roles to the service account you have created. Please take a look at the following documentation.
Additionally, I would like to share with you the introduction to authentication, which is really important.
I hope you find the above pieces of information useful.
I'm searching for a way to switch from a test environment to prod1, prod2, and prod3 environments that have the same structure but whose schemas have different names.
Is it possible to do this with the same analysis, which will later be opened with Spotfire web clients (one dedicated to each environment)?
I found this in the documentation, but as I said, I need something configurable depending on the environment.
EDIT
Data source name: DbParc,
username: DB_PARC2.
So, for example, when using this analysis in an environment with username DB_3, the information link or procedure element will still use DB_PARC2.
Thanks for any advice.
You can create a dashboard (dxp) that has its own set of information links and its own set of data source objects. You need to have all of those objects in one folder.
Then you can use the Library Administration tool and take the Copy action.
You pick the destination and confirm the Copy action.
The newly created folder will contain the newly created dashboard, and, essentially, the dashboard and its information links, column objects, etc. (all dependencies of the dashboard) will be connected to the newly created/copied data sources. Now you can change the data-source-specific information as per your needs.
You should be able to do this using the (SOAP) Web Services API: https://docs.tibco.com/pub/spotfire_server/7.14.0/doc/api/TIB_sfire_server_WebServices_API_Reference/index.html
There is also an Automation Services task that might be useful: https://docs.tibco.com/pub/sfire_autsvcs/7.14.0/doc/html/TIB_sfire_autsvcs_7.14.0_UserGuide/GUID-A7A5FC46-2DC1-4E70-81F4-B34476AE9221.html
I am struggling to understand how AWS API Gateway wants me to organise my APIs such that versioning is straightforward. For example, let's say I have a simple API for getting words from a dictionary, optionally filtering the results by a query parameter. I'd like to have v1 of this be available at:
https://<my-domain>/v1/names?starts-with=<value>
However, the closest I can get with API Gateway is
https://<my-domain>/names/v1?starts-with=<value>
... which is quite backwards.
What I've got in the console is a "Names API" with a "v1" resource supporting a GET method. I also have my custom domain set up to map a base path of "names" to "Names API" and stage "test". The base path must be unique, so putting "v1" there is only a short-term win; once I create my second API (e.g. a Numbers API), it'll have a v1 too, and I won't be able to create a second mapping.
Any and all help is greatly appreciated as I'm out of ideas now.
Do not create the version path (/v1) as a resource in your API. Instead, simply call your API "Names V1" and start creating the resources (/names). When you want to make a breaking change and create a new version of the API, we recommend you create an entirely new API called "Names V2". Once again, simply create your resources without the version path.
To bring the two APIs together, you can use custom domain names. A custom domain name in API Gateway includes both a fully qualified domain name and a base path. Create two custom domain names:
myapi.com/v1 -> points to the prod stage of the Names V1 API
myapi.com/v2 -> points to the prod stage of the Names V2 API
This way you can keep bug-fixing v1 while making changes to v2 and deploy the two APIs independently. The custom domain name will bring them together and make them appear under the same domain (myapi.com/v2/names).
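If you script your deployments, the mappings can also be created with boto3 (a sketch; it assumes the custom domain myapi.com already exists in API Gateway, and the REST API ids are placeholders):

    import boto3

    apigw = boto3.client("apigateway")

    # Map myapi.com/v1 to the prod stage of the Names V1 API...
    apigw.create_base_path_mapping(
        domainName="myapi.com",
        basePath="v1",
        restApiId="abc123",   # Names V1 API id
        stage="prod",
    )
    # ...and myapi.com/v2 to the prod stage of the Names V2 API.
    apigw.create_base_path_mapping(
        domainName="myapi.com",
        basePath="v2",
        restApiId="def456",   # Names V2 API id
        stage="prod",
    )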
Hope this helps.
How do I configure multiple repositories in one Alfresco instance?
For example, in 'alfresco-global.properties', could I configure multiple repository locations:
dir1.root=\\server1\driver1\alf_data
...
dir2.root=\\server1\driver2\alf_data
...
dir3.root=\\server2\driver1\alf_data
And then I could manage all of these repositories in this Alfresco instance.
Benefits:
1) I can manage them in one Alfresco instance.
2) I can increase my storage capacity at any time by adding new repositories.
3) Search & index performance improves, since there are many different storage hard disks.
How to do that?
We can also track this issue on the official Alfresco forum.
You can just add content stores to Alfresco; take a look here: http://wiki.alfresco.com/wiki/Content_Store_Selector or here: http://docs.alfresco.com/4.1/topic/com.alfresco.enterprise.doc/concepts/store-manage-content.html
So basically, you're adding a new store to Alfresco next to the workspace://SpacesStore.
By adding an aspect to a piece of content, you can move the content to the other location.
Probably you'll need to do some more stuff, but this will get you started.
Alfresco does not have a multi-repository feature. You always have one repository, but:
you can add & manage different content stores, as Tahir mentioned.
you can also use non-file-system content stores like EMC Centera, NetApp Filer...
you can also use elastic content stores like Caringo CAStor
you can enable multi-tenancy mode
without extensive programming, you'll always have one central DB and one central search index for now
You can do it with the content store selector, but I've heard it was removed in the 4.2 Community release. I have yet to verify that.
This might sound a little complicated, but since I often work on my local databases in Lotus Notes, I have the problem that I cannot authenticate. So I'm always working as Anonymous on my database.
The problem is that I cannot test all functions, because for that I would need a valid Notes name.
How can I authenticate on localhost to work with my name/account and not as Anonymous?
You can not authenticate XPages/web applications using the local HTTP preview. You need to install a local server to do that (which is a good thing anyway for XPages development).
Try connecting to your machine using the fully qualified domain name, e.g. http://mymachine.mydomain.com instead of http://localhost.
You can add yourself to your local address book and then add yourself to the database security settings as Manager or whatever level you want. That will let you log in using HTTP for a local database.
I am looking to do this as well, and I recalled a tip from searchdomino.com; the poster is Shawn Dezego:
http://searchdomino.techtarget.com/tip/Testing-Authentication-Authorization-in-a-Web-App-Locally-WIthout-Running-a-Domino-Server
Here's the gist:
Just create any groups in your local address book and add your name to
the proper groups, roles, etc. Then go to your domain's public address
book (Domino Directory), copy your person doc, and paste it into your
local NAB. That's it.
This is the same basic tip as offered by the adjacent commenter. However, I think this may not work for XPages apps, so I am loading a local server anyway.
Just create a person document in the local NAB (names.nsf) and add an HTTPPassword field containing your password as text (hash it using the @Password("mypassword") formula).
Make sure the person document contains the FullName field, where you can put a text list of your aliases; Notes will use the first entry in the field as your name.
And remember to set the first entry in canonical form (CN=user/OU=organization/O=domain).
Now you are ready to use this name in ACLs and in (nested) groups.
I suggest using the hosts file to remap localhost to your site's domain.
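For example (www.mysite.com is a placeholder for your site's domain), add a line like this to /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows):

    127.0.0.1    www.mysite.com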
Enjoy!
(P.S.: You need to add an Anonymous entry in your db's ACL and set it to Editor access level. Once you've opened the application in the browser, use the URL command "&login" to force Notes to authenticate you.)